Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugz0TRKhb…`: "Thanks you Sal Khan. My thought exactly. I also look at AI, in the context of a …"
- `ytc_Ugznd6l_B…`: "Man #### the robot with the gun this car can keep robots alive in a gun fight…"
- `ytc_Ugwh88n14…`: "Is there somewhere I can go to bet that 85 million jobs won't be replaced by AI …"
- `ytc_Ugycf0yLV…`: "Maybe we could just go into a forest and rebuild society if all hope is lost and…"
- `ytr_UgxsRaTch…`: "@Jessica David Bro... I hate to tell you this but Call of Duty -has- is artifici…"
- `ytc_Ugwth1-Dq…`: "Ai gone end up quitting and say we tired of working for free like slave ohhh the…"
- `ytc_UgylJHJDU…`: "It could suddenly go horribly wrong if humanity agrees to put its safety in the …"
- `ytc_UgxRrTtMo…`: "I thought that too and then the Ketamine wore off. I also thought Aurora loved …"
Comment
They acknowledge he was able to get around the guardrails:
>When Adam shared his suicidal ideations with ChatGPT, it did prompt the bot to issue multiple messages including the suicide hotline number. But according to Adam’s parents, their son would easily bypass the warnings by supplying seemingly harmless reasons for his queries. He at one point pretended he was just "building a character."
[https://www.yahoo.com/news/articles/family-teenager-died-suicide-alleges-114040476.html](https://www.yahoo.com/news/articles/family-teenager-died-suicide-alleges-114040476.html)
reddit · AI Harm Incident · 1756223706.0 · ♥ 588
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_nas8uw5","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_nas2pmo","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_natz30g","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_natwvdy","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"rdc_narwpwb","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
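The raw response is a JSON array of per-comment codes along the four dimensions shown in the table above (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be parsed and looked up by comment ID is below; the function name `index_codes` is hypothetical, not part of the tool shown, and the skip-on-malformed-entry behavior is an assumption.

```python
import json

# The four coding dimensions present in each entry of the raw response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_response: str) -> dict:
    """Parse a raw LLM batch response and index the codes by comment ID.

    Malformed entries (missing ID or missing a dimension) are skipped
    rather than failing the whole batch. (Hypothetical helper.)
    """
    codes = {}
    for entry in json.loads(raw_response):
        cid = entry.get("id")
        if cid and all(dim in entry for dim in DIMENSIONS):
            codes[cid] = {dim: entry[dim] for dim in DIMENSIONS}
    return codes

# Two entries copied from the raw response above.
raw = """[
 {"id":"rdc_nas8uw5","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"rdc_narwpwb","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]"""

codes = index_codes(raw)
print(codes["rdc_nas8uw5"]["policy"])  # liability
```

Indexing by ID rather than list position keeps the lookup robust when the model returns entries out of order or drops one.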