Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "@voidmammal saving money. I think the monthly subscription to a generative ai c…" (ytr_UgxI_6XF1…)
- "Until AI is able to move beyond just regurgitating what it's been given, it has …" (ytc_Ugy0vNpBa…)
- "Great tutorial, so conversational and so easy to follow. I now use Chatgpt on a …" (ytc_Ugz2E7a8v…)
- "What he's describing this global AI sounds like skynet to me and anyone who has …" (ytc_Ugw_bjtVI…)
- "Immediately sending this to my friends. They love character ai. (Tip you want to…" (ytc_UgxtBJYkh…)
- "Now imagine... AI meets... quantum computing ... A marriage made to dominate... …" (ytc_UgyYLT8EG…)
- "If this is true, I’m not surprised. As the CC stated, the bias is in the data, w…" (ytc_Ugy4oTDBT…)
- "15:46 conclusion: humans lie & AI may be conscious, therefore stop interrogatin…" (ytc_Ugwx-Grke…)
Comment
I don’t think the parents are solely blaming ChatGPT. Like anyone who loses someone to suicide, they’re probably struggling with heavy guilt and questioning what they could have done differently. The bigger concern, though, is that these tools can sometimes provide dangerous answers about self-harm or harming others. No one seeking that information should be able to access it so quickly or in a way that feels personalized to their situation. I think AI chats should have firm safeguards so that responses about self-harm or violence are never generated in any context or with any workaround. With protections like that in place, lives could be saved, especially for young people whose critical thinking part of the brain isn't fully developed yet.
youtube · AI Harm Incident · 2025-08-26T15:5… · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UgzafhFr892ZeYVAy3h4AaABAg.AMI5_RTdGmuAMIF5tYwX","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzafhFr892ZeYVAy3h4AaABAg.AMI5_RTdGmuAMJXjQ0P5zC","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_UgxQ8zyOKwbfGiaH5KZ4AaABAg.AMI3oeOP7vJAMLWj3x2KpK","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgzWy0LTIHb62T8EjG94AaABAg.AMI3O9B9EUdAMJXUFa7q1K","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytr_UgzWy0LTIHb62T8EjG94AaABAg.AMI3O9B9EUdAMM4yfAvVzJ","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"indifference"},
{"id":"ytr_Ugw9ET0GRARhZ7oJx4J4AaABAg.AMHzMJsDPezAMKaV9ZkMD-","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytr_UgyKfyLBocEbE82YyBF4AaABAg.ABhltTJKhHBACHH69ig772","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxQt2HUBTBmZ5WY6-R4AaABAg.9rl1rl6zNX99rr7JDupXrJ","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgxQt2HUBTBmZ5WY6-R4AaABAg.9rl1rl6zNX99rr9P0FDDpJ","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgwIUPjP0djvjj6RZ2R4AaABAg.9r-H1DawZsEA11Rnk2u5zz","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
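The raw response above is a JSON array of per-comment codes across the four dimensions shown in the coding-result table. A minimal sketch of how such a batch could be parsed and sanity-checked is below; note that the allowed value sets are only the ones observed in this dump (the full coding vocabulary may be larger), and `parse_raw_response` is a hypothetical helper, not part of any pipeline named here.

```python
import json

# Value sets observed in this dump per dimension (assumption: the real
# codebook may contain additional values not seen here).
OBSERVED_VALUES = {
    "responsibility": {"ai_itself", "user", "company", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "unclear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codes by comment ID.

    Raises ValueError if an entry is missing a dimension or uses a value
    outside the observed vocabulary.
    """
    coded = {}
    for entry in json.loads(raw):
        comment_id = entry["id"]
        for dim, allowed in OBSERVED_VALUES.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim} value {value!r}")
        # Keep only the schema dimensions, dropping any extra keys.
        coded[comment_id] = {dim: entry[dim] for dim in OBSERVED_VALUES}
    return coded

# Usage with a one-entry batch shaped like the response above
# (the "ytr_example" ID is illustrative, not from the data):
raw = ('[{"id":"ytr_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]')
codes = parse_raw_response(raw)
print(codes["ytr_example"]["policy"])  # liability
```

Indexing by comment ID mirrors the "Look up by comment ID" view above: once parsed, each coded comment can be retrieved directly by its `ytr_`/`ytc_` identifier.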