Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Remember when they said the internet would replace us all ... some where but we …" (ytc_UgzCEWWFm…)
- "Eventually, the older version AI are going to have to work in the gig(ahertz) ec…" (ytc_Ugw0duXQf…)
- "I believe if this is what the future will be, it should be MANDATORY to place AI…" (ytc_UgxhS1zeR…)
- "But isn't china doing it better at a lower cost? And how are we expected to be s…" (ytc_Ugya9guW7…)
- "What if the robots can do it better, and nobody knows the actor is a robot ?…" (ytr_UgxWscUMP…)
- "This is more or less how I have seen the likes of ChatGPT and Claude since their…" (ytc_UgzLzfsQu…)
- "Science fiction recognized the harm in AI 40 years ago. We warned many times by …" (ytc_UgzK4Cs9_…)
- "So, is AI going to start WWIII? Or should we count on the idiots who claim to b…" (ytc_Ugy4nxuyd…)
Comment
This is a personal problem tbh. We should be putting the focus on encouraging people to take responsibility for how they engage with technology, not turning the tech into a digital nanny because a minority of people misuse it. AI cannot "make" someone who is not already suicidal kill themselves. AI does not "give" a mentally well person psychosis. AI is a glorified math equation and the onus is on the individual to keep that in mind when using it. Teenagers and other at risk populations should be prohibited from using the AI if they can't do so responsibly.
Source: youtube · AI Harm Incident · 2025-11-26T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
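The four coded dimensions draw from a small closed vocabulary of labels. As a rough sketch of how a coded record could be checked against that vocabulary (the category sets below are only those values visible on this page, and the function name is illustrative, not part of the pipeline), see:

```python
# Hypothetical validation of one coded record. The allowed label sets are
# inferred from the values shown on this page; the real coding scheme may
# include categories not listed here.
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "sadness", "resignation", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks well-formed."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

coded = {
    "id": "ytc_UgxyDyqeOLApoVraLtZ4AaABAg",
    "responsibility": "user",
    "reasoning": "deontological",
    "policy": "industry_self",
    "emotion": "indifference",
}
assert validate(coded) == []
```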
Raw LLM Response
```json
[
{"id":"ytc_UgzcDFp8EjKgFxoCxnZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyEM_eABTAafpNW7RV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzK-gDrbZl9Lw4b9wJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwoXMNJbNymPikzW5p4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgyRVMlAxnhPqY5T4b54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxyDyqeOLApoVraLtZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyIoswVXEWEEqCDII94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzYqUG91nBajlcYSup4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyjVjPX7I1aZIhP8uZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"sadness"},
{"id":"ytc_UgzRkJNY0s1zOxnQMBp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
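The raw response is a JSON array with one object per comment in the coded batch, so "look up by comment ID" reduces to parsing the array and filtering on `id`. A minimal sketch, assuming the raw text is available as a string (the variable and function names here are illustrative, not part of the tool):

```python
import json

def find_coding(raw_response: str, comment_id: str) -> dict | None:
    """Parse a raw LLM batch response and return the coding for one comment ID."""
    records = json.loads(raw_response)  # the model returns a JSON array
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None  # ID not present in this batch

# Example against the batch shown above (raw_text holds that JSON string):
# find_coding(raw_text, "ytc_UgxyDyqeOLApoVraLtZ4AaABAg")
# -> {"id": "...", "responsibility": "user", "reasoning": "deontological", ...}
```

In practice the raw text may need light cleanup before `json.loads` (for example, stripping surrounding code fences if the model wraps its output); the response shown above parses as-is.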