Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "This guy's Joker arc should be studied. I've never seen such a descent to madnes…" (ytc_Ugw4qGFIw…)
- "Can you just download and glaze ai art to create a bigger pool of toxic imagery?…" (ytc_UgzlxPLg1…)
- "Now AI will learn from this and do better to fake being human. Congratulations a…" (ytc_UgymVgYSZ…)
- "She said "Is it more likely" therefore forcing Chatgpt to answer in a specific w…" (ytc_UgyosSOSK…)
- "Yeah but still AI makes so many mistakes and is not usable for so many simple ta…" (ytc_UgzQwDe4P…)
- "Every AI generated image is depressing when you've spent 18 years on a dream tha…" (ytc_Ugyo-OEkG…)
- "Mirren, actually there wasn't, the car was in autonomous mode, do you think a dr…" (ytr_UgyFBknqQ…)
- "My phone is just a tool for pizza-making because I have to tell the people on th…" (ytr_UgxTIDHfl…)
Comment
Yes, AI can be misused — like any powerful tool.
But blaming ChatGPT alone for tragedy oversimplifies what are usually deeply complex, heartbreaking mental health battles.
I’m one of many people who benefited from AI during a time of intense personal and professional struggle. It helped me write, plan, market, and rebuild my small business when I had no one to help me. It taught me things I never thought I could learn.
AI isn’t perfect. But it’s not the villain here.
Let’s focus on building safeguards, yes — but let’s also honor the many silent ways this technology has helped people survive.
Platform: youtube · Topic: AI Harm Incident · Posted: 2025-11-09T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwTQdYPaO7mTBV6wFx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxzfKzRCkKv9TinQG54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwScyw5XLAfeU48tqZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGWvHo_AbcYD-Xqc54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwzr_uNJRS2Y5YA0tx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxS-fxrqnsiczBLFkJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxONfN4mBI_yHbS_dp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugw01GNaxolLhXGJxEB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwYyYt3aw1-Dkg2kXR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxU8SoXHz7uC8Fu4Qd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
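Before storing a batch like the one above, it helps to validate that every row carries only known labels, since LLM output can drift from the codebook. The sketch below checks each coded row against per-dimension value sets. The allowed values are an assumption inferred solely from the sample response shown here; the real codebook may define more categories.

```python
import json

# Allowed label sets per coding dimension.
# ASSUMPTION: inferred from the sample response above, not an official codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"liability", "regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "fear", "mixed"},
}

def validate_codes(raw: str) -> list:
    """Parse a raw LLM response and reject any row with an unknown label."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows

# Example with a single well-formed row (hypothetical comment ID).
sample = ('[{"id":"ytc_x","responsibility":"none","reasoning":"consequentialist",'
          '"policy":"none","emotion":"approval"}]')
print(len(validate_codes(sample)))  # 1
```

A failed check raises rather than silently storing a bad row, so malformed batches can be re-queued for re-coding instead of polluting the results table.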