Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI doesn’t have the human spirit, tge reason most people go places, buy things, …" (ytc_UgxFLwMqn…)
- "19:04 they just want to be or imitate God 😂 There are human beings in the world …" (ytc_Ugy9_D2D5…)
- "Ai works by studying online and making predictions based on what we do online so…" (ytc_UgwcY8__j…)
- "Well AI ONLY uses documents it's been fed that are known to be facts,so......the…" (ytc_Ugz1Vv674…)
- "It didn’t used to (without user prompting) but at some point it started doing it…" (ytr_UgzqO18xO…)
- "The interviewer doesn't really know much about AI and just keeps asking the same…" (ytc_UgzR_MJ4a…)
- "I wana see this people say that Midjourney is just \"taking inspiration\" from Dis…" (ytc_UgyH8okIb…)
- "I got non wrong- but the bots is what is wrong. An ai pissed on me😭😭😭 I WAS SO S…" (ytc_Ugxnku-SX…)
Comment
Wouldn't the Michelle Carter case serve as precedent? The court deemed that placing someone in a situation that leads to their suicide (even via text) is involuntary manslaughter.
It would depend on how much encouragement was on the LLM's part.
Iirc what fucked Michelle Carter was that Roy, her boyfriend, had gotten cold feet and she told him to get back in the car and finish it, so if at any point any of the victims expressed doubt and ChatGPT encouraged them to get back on the suicide path, OpenAI is fucked.
reddit · AI Governance · 1762486444.0 (Unix epoch) · ♥ 60
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_nnll0tr","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_nnp5467","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_nnjd8u2","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"rdc_nnjea60","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"rdc_nnjkoew","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
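
A raw response like the one above can be turned into a per-comment lookup table with a small parser. This is a minimal sketch, not the tool's actual code: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response shown, while the function name and validation logic are assumptions.

```python
import json

# Fields every coding record is expected to carry (taken from the raw
# response above; treating them as required is an assumption).
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(text: str) -> dict[str, dict]:
    """Parse a JSON array of per-comment codes into {comment_id: codes}."""
    records = json.loads(text)
    out = {}
    for rec in records:
        missing = REQUIRED - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
        # Key by comment ID; keep only the four coding dimensions.
        out[rec["id"]] = {k: rec[k] for k in REQUIRED if k != "id"}
    return out

# Usage with the first record from the raw response above:
raw = ('[{"id":"rdc_nnll0tr","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"approval"}]')
codes = parse_coding_response(raw)
print(codes["rdc_nnll0tr"]["emotion"])  # → approval
```

Keying by `id` makes "look up by comment ID" a plain dictionary access; a record with missing fields fails loudly rather than silently producing an "unclear" row.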