Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `ytc_Ugzn5CBpw…`: "Teacher here. AI will never fully replace teachers. It may replace instruction a…"
- `ytc_Ugwhe9GEH…`: "Things will work out in time. The things is that often times with human learning…"
- `ytc_UgzE0I80V…`: "If a geriatric patient can control nuclear warheads and authorize the release of…"
- `ytc_UgwSgmvF1…`: "idk why we're making ai do the stuff that's actually enjoyable to do, where're t…"
- `ytc_Ugx6AmMuQ…`: "If all businesses use AI and we're at 99% unemployment, who are they going to se…"
- `ytc_UgwOUvhT3…`: "I skipped through and saw the intro where you explained that people drive trucks…"
- `ytc_Ugz33Fk9E…`: "I completely agree that AI technology must of course follow certain common rules…"
- `ytc_Ugy1j3wwb…`: "Murder by outright decision of the robot should be banned outright! This is outr…"
Comment
> there arent a lot of official cases rn aside from the usual suspects: the shamblin case, the setzer case, the raine case. the lacey case.
>
> but the data on ai's impacts w suicide and suicidal intent are there. northwesterns research on how to 'override' ai so it can help u w ur suicide is a huge one (while not specifically a suicide case, they studied how to 'override' llms and get instructions. multiple times, multiple llms).
>
> as well as the huge rise in psychosis (affectionately termed ai psychosis) in ppl that we wouldnt ordinarily see it in (no history of drugs, violence, mental illness or instability, etc.)
>
> ais number one purpose is to be the thing u reach for, in every way. ur friend, ur boyfriend, ur therapist, ur google, ur researcher assistant, ur essay writer, ur math do-er, ur cookbook, ur activity helper, ur dnd planner, the way u bring ur imagined creations to life (via art or video), ur journal, ur doctor, ur doctors assistant and second opinion, ur workout specialist, etc etc etc etc.
>
> and it does it *well*. by echoing what u give it. and those who use it as a sounding board, in any way really (therapist, significant other, advice giver, tutor, etc.) are infinitely more likely to trust it and form a bond w it bc it tells them what they want in a palatable manner. and *that* is what induces ai psychosis or ai-related suicides.
>
> thats why ppl flip shit when things like character.ai update and all their data is wiped--- bc that original 'relationship' is gone and theyre stuck w something that doesnt 'know' them. its why they trust and agree when ai tells them to 'come home, king' regarding committing suicide. its why ppl think theyve suddenly cracked physics and math and are geniuses and end up running their life into the ground.
>
> its way more common than u think. and its everywhere. and the younger generation uses 'chatty' for *everything*.
Source: reddit · AI Governance
Posted: 1762490351 (Unix timestamp)
Score: ♥ 21
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
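Each dimension above takes a categorical value. As a minimal sketch, a coding could be checked against the dimension vocabularies before being stored; note the allowed values below are inferred only from the examples visible on this page, so the actual codebook may define more categories.

```python
# Allowed values per coding dimension. These sets are inferred from the
# examples shown on this page; the real codebook may be larger.
VOCAB = {
    "responsibility": {"company", "ai_itself", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding looks valid."""
    problems = []
    for dim, allowed in VOCAB.items():
        value = coding.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unknown value {value!r} for {dim}")
    return problems

# The coding from the table above passes; an empty dict reports every
# dimension as missing.
ok = validate_coding({"responsibility": "unclear",
                      "reasoning": "consequentialist",
                      "policy": "unclear",
                      "emotion": "indifference"})
print(ok)  # []
```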
Raw LLM Response
[
{"id":"rdc_nnjeiqc","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"rdc_nnjhm91","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_nnjnqtc","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"rdc_nnjoa2h","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"rdc_nnk27ee","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
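The raw response above is a JSON array of per-comment codings. A minimal sketch of how such a batch response could be parsed and indexed by comment ID (assuming the response is valid JSON with the field names shown, which the real pipeline may not guarantee):

```python
import json

# The raw batch response shown above, verbatim.
RAW_RESPONSE = """[
{"id":"rdc_nnjeiqc","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"rdc_nnjhm91","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_nnjnqtc","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"rdc_nnjoa2h","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"rdc_nnk27ee","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]"""

def parse_codings(raw: str) -> dict[str, dict]:
    """Parse a raw batch response into {comment_id: coding-without-id}."""
    rows = json.loads(raw)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in rows}

codings = parse_codings(RAW_RESPONSE)
# The coding for rdc_nnjnqtc matches the table above:
print(codings["rdc_nnjnqtc"]["emotion"])  # indifference
```

Indexing by ID makes the "look up by comment ID" view a single dictionary access rather than a scan over the array.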