Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I don't think people fully realise that ChatGPT is not the highest form of AI. I…
ytc_Ugxd5olZe…
@Avenger222 not me: I happen to live in *country name redacted* where most ai co…
ytr_Ugxrg90pT…
I don't think it's wrong for AI to steal art.
At the same time, I understand th…
ytc_UgzZfc1Wo…
I uploaded a image that I drew into AI and told it to make a drawing that was th…
ytc_UgwBOJ3pi…
The Robot you had one Job & you failed it & go's wild I don't think the human wi…
ytc_UgwvSxnB-…
What congress need to do is create laws and it loop holes of And company who lay…
ytc_Ugz1pexY7…
Fuck selfdriving. If you don't want to drive, take the bus. This technology shou…
ytc_Ugzl4f42n…
This is one of the best kind of post here in Reddit i enjoy and read. Nice work,…
rdc_jgheanm
Comment
There is no proof that AI has ever “killed for the first time.” No verified news, official documents, or expert reports support this claim. The so-called leak mentioned here cannot be found in any reliable source, and dramatic phrases like “we are near the end” are only meant to create panic and gain views. Statements attributed to Geoffrey Hinton are often taken out of context; he has expressed concerns about AI development, but he has never claimed scenarios like the ones presented in this video. Serious information about AI safety comes from academic institutions and research centers, not from sensationalistic videos. Always check sources and do not be misled by apocalyptic claims without evidence. The truth is simple: AI has not killed anyone, and we are not near the end of the world.
youtube
AI Harm Incident
2025-07-27T14:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
  {"id": "ytc_Ugz7zLqZDz5vJB6YXvp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz1Xzid4wBrdmrVp6R4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzBT8DO80GMzaMHDFZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwd-MsB_jipSiXU57B4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy8EY-yjdfOyYGo3uh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzXfAV2lKy53xWiCxl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz2Nz-JP6lYJm_oB2F4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxLmXR6mEJQhcXl5sp4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwV5RQjVB_HrAIuMA94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxwXvbJIoj5yAlCeNx4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
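The raw response above is a JSON array of per-comment codes across the four dimensions shown in the coding-result table. A minimal sketch of how such a response might be parsed and validated (the allowed-value sets are inferred from the codes that appear above, not an official schema, and `parse_codes` is a hypothetical helper name):

```python
import json

# Allowed values per dimension, inferred from the codes visible in the
# coding-result table and raw response above (not an exhaustive schema).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "approval", "outrage", "mixed", "fear", "resignation"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, dropping rows
    that are missing an id or carry a value outside the allowed sets."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if cid and all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

raw = ('[{"id":"ytc_Ugz7zLqZDz5vJB6YXvp4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
codes = parse_codes(raw)
# codes["ytc_Ugz7zLqZDz5vJB6YXvp4AaABAg"]["emotion"] == "indifference"
```

Dropping invalid rows rather than raising keeps a single malformed code from discarding the whole batch; a stricter pipeline could log the rejected rows instead.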