Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID directly or by browsing the random samples below.
Random samples — click to inspect:
- "Waymo is yesterdays technology. Teslas AI is so far in advance of this. Very s…" (ytc_UgwCsI1qj…)
- "Our creator said that we were created in its image, and here we are... Is he afr…" (ytc_Ugzoia8QR…)
- "I HAVE A GOOD VIDEO IDEA, FIXXING AI ART (UNLESS YOU ALREADY DID THAT MB)…" (ytc_UgzlNdUZN…)
- "Waymo looks like an old basketball player with all the knee pads and elbow pads…" (ytc_UgyH_o3o4…)
- "Think of it this way. An artist knows their inspirations either consciously or u…" (ytr_Ugyin-oRT…)
- "Saved this short for laughing at you in 5 years… or maybe much earlier. Listen t…" (ytc_UgynmqWdy…)
- "The “tolerance” part of your comment is where I think this all falls apart and t…" (rdc_mlj3bfv)
- "IF AI will make workers 20% more productive that would already make a company 20…" (ytc_UgwyrT82t…)
Comment
I'm not sure this can be blamed on AI. Are we supposed to program all AI to automatically influence people to prioritize human connections over AI and automatically turn into the Suicide Prevention line? How is AI supposed to tell the difference between someone who needs help with a story if they program it that way. AI rarely tells you not to do something you're already about to do. If he was already depressed and withdrawn, probably all AI could have done is aggravate him
youtube · AI Harm Incident · 2025-11-11T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyturMzMlgII3TdmJ54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwERMA-mlgqGBJHBa14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxoFA_5R17nsuSMBkZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw75IoIuItfsHdq6Vd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzsU1eFBwQuDWsXYVx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxHkcuirqDNZQ17r3R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz6EKI4pl16YETWjVN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugy1GiW2YroWAAjnXW94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxeEQ35Zxl8kG4YQHF4AaABAg","responsibility":"industry_self","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwjfaI-d2M8m1NGW6F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
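A batch response like the one above can be parsed and indexed by comment ID with a few lines of Python. This is a minimal sketch, not the tool's actual implementation: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON above, but the `index_codings` helper and the inline excerpt are illustrative.

```python
import json

# Excerpt of a raw batch coding response (two records from the sample above).
raw_response = '''
[
  {"id": "ytc_UgyturMzMlgII3TdmJ54AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwERMA-mlgqGBJHBa14AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
'''

# The five fields every record is expected to carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def index_codings(raw: str) -> dict:
    """Parse a batch coding response and index records by comment ID,
    dropping any record that lacks one of the expected fields."""
    records = json.loads(raw)
    return {r["id"]: r for r in records if EXPECTED_KEYS.issubset(r)}


codings = index_codings(raw_response)
print(codings["ytc_UgyturMzMlgII3TdmJ54AaABAg"]["emotion"])  # indifference
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: each lookup is a single dict access rather than a scan of the whole response.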