Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
| Comment preview | Comment ID |
|---|---|
| My husband's workplace replaced all of their devs and PMs with AI. They were ha… | ytc_UgyMq2fS7… |
| Everyone freaking out when I've literally always thought what the ai just said, … | ytc_UgxKM19qt… |
| I wouldn't want Hegseth to have a scapegoat to wipe his hands and clear his cons… | ytr_Ugym4Zun2… |
| If content creators and YouTubers get to have their entire videos/channels and e… | ytc_Ugw8FkjXc… |
| It sounds like humans are giving themselves more leisurely time with the super i… | ytc_UgywNEQUt… |
| The video contains multiple false claims and unsupported conclusions, repeatedly… | ytc_Ugw72l4Fq… |
| I'm not making any AI art and I never will, yet I'm certain that I'm unable to e… | ytc_UgzinRwKP… |
| Can anyone list the 5 jobs they discussed that wont be replaced by AI ?… | ytc_Ugx0g9vr6… |
Comment
The problem of chatbots is they don't actually reason, they just try to auto complete how a person would talk. This is probably also why it doesn't see "ChatGPT" and "I" as the same thing when defending itself.
Another problem with that is that if you argue with it, it can confirm something that just isn't true... This problem becomes even bigger when you use a lot of analogies because it can't actually reason so it will fail to recognize where the analogy breaks down. My suspicion is that ChatGPT probably told this person how it was used and then the person explained his thought process and ChatGPT probably said it made sense (at least after a few tries).
Source: youtube · Topic: AI Harm Incident · Timestamp: 2025-11-25T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
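For readers working with these results programmatically, here is a minimal sketch of one per-comment record as a typed structure. It is an assumption for illustration, not the tool's authoritative schema: the field names mirror the "Coding Result" table above, and the label values noted in the comments are only those visible on this page, so the real vocabularies may be larger.

```python
from dataclasses import dataclass

# Hypothetical shape of one coding record; field names mirror the table above.
@dataclass
class CodingResult:
    comment_id: str      # e.g. "ytc_UgzcAWmS-wUP7k8140N4AaABAg"
    responsibility: str  # values seen on this page: ai_itself, developer, company, user, none
    reasoning: str       # values seen on this page: consequentialist, deontological, virtue, mixed, unclear
    policy: str          # values seen on this page: none, ban, liability
    emotion: str         # values seen on this page: fear, outrage, approval, indifference, mixed
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:24:53.388235"

# The table above expressed with this structure; the comment ID is a placeholder,
# since the page does not show which ID belongs to the displayed comment.
example = CodingResult(
    comment_id="ytc_example",
    responsibility="ai_itself",
    reasoning="mixed",
    policy="none",
    emotion="mixed",
    coded_at="2026-04-27T06:24:53.388235",
)
```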
Raw LLM Response
[
{"id":"ytc_UgzcAWmS-wUP7k8140N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwHgP-wNqNuQ3O1exF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmH4s0r4lIAb7q4H54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwOCEYUVu24SYpbMcV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzM8ID5Td8xyfQ9LUR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzLJhQ18dDU-fAGyDN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzCd4_ca2PSfU8_LnZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx7JlRuW84BQ7tRjCd4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyWzlt69jznoC6dVyN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwIA-mHu9xKInv1j1h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
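To mirror the "look up by comment ID" workflow above, a minimal parsing sketch follows. It assumes the raw response is exactly what is displayed, a JSON array of objects each carrying an "id" field; the function name `lookup_coded_comment` is hypothetical.

```python
import json

def lookup_coded_comment(raw_response: str, comment_id: str) -> dict | None:
    """Parse a raw batched response (a JSON array of per-comment codes) and
    return the record whose "id" matches comment_id, or None if absent."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None

# Usage with the first record of the response shown above.
raw = (
    '[{"id":"ytc_UgzcAWmS-wUP7k8140N4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
)
print(lookup_coded_comment(raw, "ytc_UgzcAWmS-wUP7k8140N4AaABAg"))
# {'id': 'ytc_UgzcAWmS-wUP7k8140N4AaABAg', 'responsibility': 'ai_itself',
#  'reasoning': 'consequentialist', 'policy': 'none', 'emotion': 'fear'}
```

Note that real model output is not guaranteed to be valid JSON, so `json.loads` can raise `json.JSONDecodeError`; any production version of this lookup would need to handle that case.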