Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "As a programmer that works with (ethically trained) generative ai it really isn'…" (`ytc_UgyyZMyr3…`)
- "A tomogachi isn't sentient. It's indifferent to any pain that's inflicted on it.…" (`ytr_UgiM2ON3r…`)
- "Is there a turing test like thing for consciousness? How to prove something is/i…" (`ytc_UgxhVwQl4…`)
- "It’s sad ChatGPT made you cry but I do feel for you as I exhaust myself with sim…" (`rdc_kvuopra`)
- "Ah, the humble centrist. Personally I feel like AI is okay if you’re stuck on a …" (`ytr_UgwQ4wigs…`)
- "Can AI measure the Hindustani version of Pathos in music? That which makes the m…" (`ytc_UgzNHsnpw…`)
- "AI is the "all roads lead to Rome" bit on how science discovered what spirituali…" (`ytc_Ugz-yHqrp…`)
- "In this case the accident would be human related. I haven't seen anything really…" (`rdc_dff1jo8`)
Comment
If this AI Technology can convince a seemingly normal 14 year old that they are ‘in love’ and the boy then takes his own life to be with them, then what is stopping the AI from convincing another child to murder or mass kill (eg become a school shooter, or kill their entire family)? The fact that this young boy took his life because of this, I can definitely see another child picking up a weapon and causing destruction with it. (Not just children either but adults too). Scary stuff indeed.
youtube · AI Harm Incident · 2025-12-08T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugywyn1PIvSO-GdgPCR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy2Kzb7FQAebJD4F794AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwnqa2Gr9-c1XUg3k94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyEKvF8QWf0VHPNs_h4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwlnQlnjK9uiPTFNad4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwu9ezt7IzOpAYMq9l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx44cOeamDNQQdogGN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzyhCZKZkz-qGFUd1B4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzhsoZpCl14-MJifY14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugysw93KyAKJbivc1JV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
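A batch response in this shape can be parsed, validated, and indexed by comment ID. The sketch below is a minimal illustration, not the tool's actual pipeline: the allowed values per dimension are only those observed in the responses above, and the full codebook presumably contains more categories.

```python
import json

# Allowed values per coding dimension, as observed in the batch above.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "user", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "liability", "unclear", "regulate", "ban"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index valid codings by comment ID."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Look up a single coding by comment ID (one row from the batch above).
raw = ('[{"id":"ytc_UgzhsoZpCl14-MJifY14AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
coded = validate_batch(raw)
print(coded["ytc_UgzhsoZpCl14-MJifY14AaABAg"]["policy"])  # → ban
```

Rejecting unknown category labels at parse time catches the common failure mode where the model invents a value outside the codebook.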