Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- "Who takes liability if things go wrong? Who will fix anything if the Ai can't? …" (ytc_Ugx1ytHQd…)
- "Jobs at High Risk • Intellectually mundane roles like paralegals, call center a…" (ytc_UgyBV8Ix8…)
- "I believe that it depends what you do with the art. AI generating takes time, a…" (ytc_Ugyq6_yck…)
- "This HORROR is advertise for a purpose... AI will only be used for automation ta…" (ytc_Ugwnq9VnY…)
- "yknow, there will be a time where the market will be oversaturated with a cheap …" (ytr_Ugw9rowB3…)
- "When AI takes billions of Jobs it won't be funny you cannot unplug technology 😔…" (ytc_Ugw7qjTXY…)
- "I don't care what anyone says, I'm never getting into a car run by ai…" (ytc_UghaVb6Wp…)
- "Ai the most retarded invention of all time! Sounds dumb for saying that but look…" (ytc_UgyFduBXK…)
Comment
> I do believe AI is dangerous but where is the accountability for adults at minimum? GPT Chat is not a medical professional or therapist.

youtube · AI Harm Incident · 2025-11-09T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyS2tt-BKp_nb0VmoV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgysmBY68nNWN1U0pVJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz0_KneLw9TemsiZdh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwG6tsm4rKoFpw52VB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxhRn-GYRPjwACfZfB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwFM0aqiexAoJFWV2h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyEcKxZGW34vEOHDgF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwadR1cgGxGfg4gxH54AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgxX4vJuOhum0otOidB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugzhct9OxwXoJgW3tAJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
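A raw response like the one above can be indexed by comment ID for lookup. The sketch below is illustrative, not the tool's actual implementation: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the raw response shown, but the function names and the two sample rows are assumptions for the example.

```python
import json

# Two rows copied from the raw LLM response above, for demonstration.
raw_response = """
[
  {"id": "ytc_UgysmBY68nNWN1U0pVJ4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwG6tsm4rKoFpw52VB4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the JSON array of coded comments and key each row by its ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_id(raw_response)
# Look up the coding result for one comment by its ID.
print(codes["ytc_UgysmBY68nNWN1U0pVJ4AaABAg"]["policy"])  # liability
```

A real pipeline would also want to handle malformed model output (e.g. catch `json.JSONDecodeError`) before indexing, since the response comes straight from an LLM.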