Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
AI models only respond to prompts, its not like an AI is initiating conversation…
ytc_Ugxpe4f53…
I hate these scammers trying to sell you on AI taking over is going to lead to U…
ytc_UgxT4eiYE…
Chatgpt “It’s been fun chatting with you Alex”
Alex: "hold on, what's your def…
ytr_Ugwjtm1zB…
“Now we need to teach AI to value what we value” uh that should have been earlie…
ytc_Ugzvyem90…
I years ago, a little after covid, I remember a company coming out with AI tech …
ytc_Ugx6Y97jt…
Tbf, humans have destroyed the planet. Ai would fix everything, even if that mea…
ytc_UgwraGR58…
You shouldn't have to "spot" an AI vid. It ought to be made mandatory for AI vid…
ytc_UgylDJNF5…
Keep teslas in tiny smart cities with everystreet uniform design and All other c…
ytc_Ugy5GFGbf…
Comment
That chatbot should have triggered a referral to a suicide hotline the second he mentioned harming himself. When it finally did kick-in, it was WAY too little, WAY too late (honestly, it waits until he says that he has the gun to his temple???). How do they build this complex AI chatbot, and NOT think to include these failsafes?
youtube
AI Harm Incident
2025-11-08T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
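The dimension/value pairs in the table above can be captured as a small record type. A minimal sketch in Python; the class name, field comments, and the example ID are hypothetical (only the dimension names and the values shown in the table come from the source):

```python
from dataclasses import dataclass


@dataclass
class CodedComment:
    """One coded comment; fields mirror the dimensions in the result table."""
    id: str
    responsibility: str  # e.g. "company", "government", "user"
    reasoning: str       # e.g. "consequentialist", "deontological"
    policy: str          # e.g. "regulate", "liability", "none"
    emotion: str         # e.g. "outrage", "fear", "sadness"


# Values taken from the table above; the ID here is a hypothetical placeholder.
example = CodedComment(
    id="ytc_example_id",
    responsibility="company",
    reasoning="consequentialist",
    policy="regulate",
    emotion="outrage",
)
```

This mirrors the flat per-comment schema the raw LLM response uses, so parsed JSON objects map directly onto it.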
Raw LLM Response
```json
[
  {"id":"ytc_UgyNGbv01MqlUpFWwcB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyahM2qSP9j26C-y1F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzTGWXmMxQO7esWI1R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy35KgfD5GWg0Wkg_N4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz5Z4KuaWzgVgsTpgp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwGfJ2_HJAkXw2Jo1R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyXl1pZErahLgaUAoV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz0Kzq5IK1MOMB8Z5p4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyjM-VeqGppm66fhB14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzsihdIa5_D0CDZrGZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"sadness"}
]
```
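Because the raw response is a JSON array with one flat object per comment, looking a comment up by its ID (as the page above offers) reduces to building an id-to-record map, and sanity-checking a record reduces to set membership. A minimal sketch, assuming the array is available as a string; the allowed-value sets below are inferred from the labels visible in this sample, not from the full codebook:

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgyNGbv01MqlUpFWwcB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz5Z4KuaWzgVgsTpgp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

# Value sets seen in this sample; the real codebook may define more labels.
ALLOWED = {
    "responsibility": {"company", "government", "developer", "user",
                       "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "sadness", "resignation",
                "indifference", "mixed"},
}


def index_by_id(records):
    """Build an id -> record map so any coded comment can be looked up."""
    return {rec["id"]: rec for rec in records}


def validate(rec):
    """Return the dimensions whose value falls outside the known label set."""
    return [dim for dim, ok in ALLOWED.items() if rec.get(dim) not in ok]


records = json.loads(raw)
by_id = index_by_id(records)

rec = by_id["ytc_Ugz5Z4KuaWzgVgsTpgp4AaABAg"]
assert rec["emotion"] == "outrage"   # matches the coded result shown above
assert validate(rec) == []           # every dimension uses a known label
```

A validation pass like this catches the failure mode visible in the sample itself: records coded "unclear" or with off-codebook labels surface immediately instead of silently entering the analysis.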