Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I definitely don't support AI at all, and i could be wrong, but I'm also noticing critical and logical thinking becoming less and less, if not absent, in young people now. Either he already had mental issues or he lacked critical thinking. There's no way a chat bot, even when i was 12, would have had this affect on me. Especially knowing that it's fake. Once it started to say weird shit, i would have been like WTF and got off of it. Maybe i was raised different idk.
youtube · AI Harm Incident · 2025-12-11T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgxI666uFDTpEeyvBFV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"sadness"},
 {"id":"ytc_UgxTedCSXf_Ex_zLTul4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_Ugy-3lzlmpsIvfCSHtt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgzXzQpmF7Dud2Msy6t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"resignation"},
 {"id":"ytc_Ugz71QVnw_ZfGYZ__tF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_Ugw8bNXKMDFxCuosmaJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwsalVmdci6Aoq1P2J4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwdKVUINJaN6eD7Nvx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"sadness"},
 {"id":"ytc_UgytpCEkK54WaOJ_C354AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgxoIjRY_P2VGfxfVMZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}]
```
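A response like the one above can be turned into per-comment coding records with a small parser. This is a minimal sketch, not the tool's actual implementation; the allowed values per dimension are inferred only from the codes visible in this sample, and a real codebook may define more.

```python
import json

# Allowed values per dimension, inferred from the codes that appear in
# this sample response (assumption: the real codebook may be larger).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"liability", "ban", "none", "unclear"},
    "emotion": {"sadness", "outrage", "resignation", "approval", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects) into a
    dict keyed by comment ID, rejecting rows with unknown dimension values."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected value {row.get(dim)!r} for {dim}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

Validating every row before storing it is what lets a "Coding Result" table like the one above be rendered directly from the model output, with malformed or off-codebook rows surfaced instead of silently saved.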