Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "When asked if the AI was wiser than humans it didn't answer the question it avoi…" (ytc_UgwszrvIu…)
- "this has already been done, hasnt anyone seen the simpsons episode where Pierce …" (ytc_UgyEfHXaO…)
- "The AI hate groupthink on Reddit is as predictable as the sun rising every day. …" (rdc_n9jn087)
- "This is not new, this has already happened. Only selling us stuff is not the big…" (ytc_Ugy0yE9Gg…)
- "_Elon should never even implemented Auto Pilot_ Tesla just put its own name to …" (ytr_UgzkOvDpc…)
- "Mr Sikorski later retweeted a post from his wife linking to the footage, saying:…" (rdc_cfl3mbw)
- "Couldn't you just cut the cord? So confused on what they mean. AI cant work with…" (ytc_UgzQ-85_T…)
- "I think if she would had addressed this sooner as AI generated then she probably…" (ytc_UgzJFgfw0…)
Comment
AI is not human. We have to get everyone to understand it is not human. It can never empathize with a living being. It regurgitates words from a thousand other sources and then spews out this garbage. Its human creator is evil and needs to be held responsible.
youtube
AI Harm Incident
2025-11-08T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyNGbv01MqlUpFWwcB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyahM2qSP9j26C-y1F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzTGWXmMxQO7esWI1R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy35KgfD5GWg0Wkg_N4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz5Z4KuaWzgVgsTpgp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwGfJ2_HJAkXw2Jo1R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyXl1pZErahLgaUAoV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz0Kzq5IK1MOMB8Z5p4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyjM-VeqGppm66fhB14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzsihdIa5_D0CDZrGZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"sadness"}
]
```
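A batch like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, not the tool's actual ingestion code; the allowed value sets are inferred from this one sample and the coding table, so the real codebook may contain values not seen here, and the `parse_batch` helper and the short `raw` string are hypothetical.

```python
import json

# Allowed dimension values, inferred from the sample response above
# (assumption: the real codebook may define additional values).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user",
                       "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "sadness", "resignation",
                "indifference", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose dimension
    values all fall inside the known value sets."""
    rows = json.loads(raw)
    return [row for row in rows
            if all(row.get(dim) in vals for dim, vals in ALLOWED.items())]

# Hypothetical one-row response for illustration.
raw = ('[{"id":"ytc_x","responsibility":"developer","reasoning":"virtue",'
       '"policy":"liability","emotion":"outrage"}]')
print(len(parse_batch(raw)))  # 1
```

Dropping (or flagging) rows with unknown values is one way to catch model drift: if the LLM starts emitting labels outside the codebook, the batch shrinks instead of silently polluting the coded dataset.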