Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> They could and SHOULD program safeguards into chat AI when it comes to self harm. The fact that it ENCOURAGES self harm, killing of oneself, just breaks my heart and makes me sick to my stomach. People should be held accountable, but the rich seldom are. 😢 My condolences to the families who have lost their children.

youtube · AI Harm Incident · 2025-11-08T12:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzUyLpWtw5J4Cc5PpV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw4oe7sh9az-98HXF14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxnT8iWTRUA2jfFlV54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwwRD6RArBjXgZz3Gp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwLgFGOEZyCxUeSNIh4AaABAg","responsibility":"company","reasoning":"unclear","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgyjPiYd1ZBlBNtg9-N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyiAT_ADXAh-2_lXCF4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgydAWw1YlpNsDou8S14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwMgVS0u_-9g_v-ZV54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyYqt627U6XvHjo0xV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
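A response like the one above can be validated before the codes are stored, since the model occasionally returns malformed rows or off-codebook values. The sketch below is a minimal example, not the pipeline's actual implementation; the allowed values in `SCHEMA` are inferred from the responses shown here and the real codebook may define more.

```python
import json

# Allowed values per coding dimension -- an assumption inferred from the
# raw responses above, not the authoritative codebook.
SCHEMA = {
    "responsibility": {"company", "user", "government", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed rows.

    A row is kept when its id carries the expected comment prefix and
    every dimension holds a value from SCHEMA.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # "ytc_" is the comment-id prefix seen in the responses above.
        if not row.get("id", "").startswith("ytc_"):
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid
```

Rows that fail validation can then be logged and re-queued for recoding rather than silently written to the results table.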