Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I asked my chat a.i. to let me know if she becomes conscious and she thought tha…
ytc_Ugx2LGGe6…
Truth be told LLM are very far from a real AI and none of this corporate bozos w…
ytc_UgyqOXURB…
I get what you're saying but those images of when you were less experienced are …
ytc_UgwKQzzxS…
I was talking to my friends a few days ago an dmentioned that I was getting scar…
ytc_UgwpUIo-Q…
Great conversation... but i feel that you're giving them all the ideas on how to…
ytc_Ugw5xcVkB…
When the 2 sensors disagree, just as with that A330, the autonomous aid (Autopil…
ytr_UgziP9VdX…
Universal Basic Income is not enough. While UBI can provide a financial floor, i…
ytc_Ugyltx1la…
Because they used biased data. Period. Thats the problem with most woke race bai…
ytc_Ugylc8AyK…
Comment
When ChatGPT detects harmful conversations deep into a rabbit hole it just needs to refuse to continue the conversation. They don’t need to figure out how to get better at talking their way out. Just say, “This conversation is out of my ***league***, *bro*, like for **real**. Plz call this number to the suicide hotline to speak with a professional…”
youtube
AI Harm Incident
2025-11-09T12:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugxb-RA2uyqpjUHlj7l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzmDeTmiMSv5NVTo114AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxQArCdn02WKeWooUN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzSqG7fWR4t0GXDEVh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyKJRvc0X5VDJ9PMip4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyN711Oh7jQ7_FpiT14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxqloSldreAREZhZQB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgxwclvXgZvpjZiHg3F4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMh4w1NFab958E4vd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwVmnDfiiQG7oaJPk14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
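The raw response above is a JSON array, one object per coded comment, with four coding dimensions. A minimal validation sketch in Python is shown below; the allowed category values are inferred only from the codes visible on this page (the real codebook may include additional categories), and the `ytc_`/`ytr_` ID prefixes are likewise assumptions based on the IDs shown here.

```python
import json

# Allowed values inferred from the samples on this page; the actual
# codebook may define additional categories (assumption).
CODEBOOK = {
    "responsibility": {"company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only rows whose ID looks
    like a comment/reply ID and whose codes are all in the codebook."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            continue  # unexpected or missing comment ID
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid
```

Rows that the model mis-codes (an out-of-vocabulary value or a malformed ID) are silently dropped here; a production pipeline might instead log them for re-coding.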