Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I think AI is a real-life Ultron; now Iron Man will have to come 😂…" [translated from Hinglish] — ytc_UgwUh-3rM…
- "history dictates what happens with over-regulation. AI is definitely being devel…" — ytc_Ugwa5qP_Y…
- "Sunday, October 26, 2025 . . . Greetings, Everyone. From The Original Star Trek …" — ytc_Ugwxi7xoc…
- "If we thought that corporations and governments were manipulating us, they have …" — ytc_Ugzj5IkXh…
- "That is why it is better for AI to fail. If we are screwed anyway, lets hope tho…" — ytc_Ugx5RTQHX…
- "Elon is actually the dangerous one not Ai. AI is only dangerous in the hands of …" — ytc_Ugx1aFpoZ…
- "i have a hard time differentiating ia generated content from people making deriv…" — ytr_UgxSMuzLn…
- "5 jobs? One of those has got to be politician right, even though AI can do repre…" — ytc_UgwKo8lcK…
Comment
Psychotherapist here. My heart breaks for the family who lost their child — that kind of grief has no map. I understand the impulse to look for accountability when the pain is unbearable. But I also believe AI isn’t a person and can’t physically intervene or replace human connection. When used responsibly, tools like ChatGPT can be incredibly helpful for organization, clarity, and even comfort — but they’re not a substitute for real support or THERAPY. I wish we could hold space for both truths: that technology can help in meaningful ways and that loss like this still calls for deep human compassion, not blame.
Source: youtube · Topic: AI Harm Incident · Posted: 2025-11-07T23:5… · ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyU1qAcZLt9XIEoyTR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzJpNpSQvJheGPl1dd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgybTxE74saEuWog7mJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzv7yXPLf4Scbi1nLd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwA4rmXetM1fVgyMnN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyEGfz1ZYgkA1z3Tfh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxKvQ5hwnOpYrkURj14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwS7G_IXTvrJLKgzhl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2NKP57F5KAjyGJ9d4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwaaN4aujVV9dLjZ1h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
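A response like the one above needs to be parsed and checked before the codes land in the results table. The sketch below is a minimal, hypothetical validator, assuming the codebooks contain exactly the labels observed in this sample batch (the real codebook may define more values); it parses the raw JSON array and drops any record carrying an unknown code.

```python
import json

# Allowed codes per dimension — inferred from the labels seen in this
# sample response; the actual codebook is an assumption here.
CODEBOOK = {
    "responsibility": {"none", "developer", "user", "ai_itself", "distributed", "company"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"indifference", "outrage", "mixed", "resignation", "fear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose codes
    all appear in the codebook."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items())
    ]

# Example: one well-formed record passes, one with an unknown label is dropped.
raw = (
    '[{"id":"ytc_Ugw2NKP57F5KAjyGJ9d4AaABAg","responsibility":"none",'
    '"reasoning":"virtue","policy":"none","emotion":"resignation"},'
    '{"id":"ytc_bad","responsibility":"nobody",'
    '"reasoning":"virtue","policy":"none","emotion":"fear"}]'
)
print(len(validate_batch(raw)))  # → 1
```

Silently dropping invalid records is only one possible policy; a production pipeline might instead queue them for a retry prompt or manual review.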