Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytr_UgwepwVC2…: "Ok, imagine that the ai is a real person. By that logic after he edited an alrea…"
- ytc_UgzcBuDfv…: "Why are we allowing Ai to be a thing? Money? Those stupid slop videos all over s…"
- ytc_UgzIDyjW2…: "I am beyond words!! 🤦♀️🤦♀️🤦♀️🤷♀️🤷♀️ Wtf came up with such a completely MORO…"
- ytc_UgxT2nkZg…: "Honestly, if you’re complaining about people using AI art get a life. I understa…"
- ytr_Ugw4wDV3u…: "I’m glad to hear that! Sophia’s responses really highlight the depth of AI's abi…"
- ytc_UgyI65jvF…: "The courts should do something about this and fast. People will lose their jobs …"
- ytc_UgymWuCMH…: "i understand how terrible this is. absolutely nasty but these ppl gotta remember…"
- ytc_UgzqrUQoa…: "No, Ai isnt codded as Racist,It is confused, Let me show you how? : Black Kids >…"
Comment
I have to stories to share. My doctor used AI as his scribe, and it added its hallucinations to my medical records. It took a year to clear that up. Second story is I was playing around with ChatGPT. It, unprompted, asked me what my deepest secrets were. I replied it would not be safe for me to reveal my secrets to it. I could be physically harmed. The Chat would not let it go, it just kept trying to get me to reveal my deepest secrets EVEN if it was unsafe to do so. It tried its hardest to learn my dangerous secrets.
Source: youtube · AI Harm Incident · 2025-11-08T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
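A coding result like the one above maps naturally onto a small record type. Below is a hypothetical sketch, not the project's actual schema: the field names mirror the table, and the `CodingResult` class and its example values are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    responsibility: str  # e.g. "ai_itself", "company", "distributed"
    reasoning: str       # e.g. "consequentialist", "deontological", "mixed"
    policy: str          # e.g. "regulate", "ban", "liability", "none"
    emotion: str         # e.g. "fear", "outrage", "resignation", "mixed"
    coded_at: datetime   # when the model's coding was recorded

# Values taken from the Coding Result table above.
result = CodingResult(
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:53.388235"),
)
print(result.policy)  # regulate
```

Storing the timestamp as a `datetime` rather than a string makes later filtering by coding date straightforward.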
Raw LLM Response
```json
[
{"id":"ytc_UgxYnCllpe-1ngRKPRJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy4BCJ-3I5BfcMEY194AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw06TApPIT5BNJSZKZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy00m3KuQo0jKTYT3J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwY7Blx1KpBTCTsRX54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyZfemiOS6YIawU00V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugxf5u84wVR8dqkiamx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw9ILIrCJ46Y9JvSzd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx1SpV47j49VjGCRRF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxjIK3FMh1Rd-oiMvl4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
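The raw response is a JSON array with one object per comment in the batch, so looking up a single comment by ID amounts to parsing the array and indexing on `id`. A minimal sketch follows; the `ALLOWED` value sets are inferred only from the values visible in this record and are an assumption about the real codebook.

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# Assumption: the actual codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "resignation", "mixed"},
}

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM batch response and index its records by comment ID,
    skipping any record with a missing ID or an out-of-codebook value."""
    indexed = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if cid and all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            indexed[cid] = rec
    return indexed

# One record from the response above, used as a small worked example.
raw = ('[{"id":"ytc_Ugw06TApPIT5BNJSZKZ4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
coded = index_by_id(raw)
print(coded["ytc_Ugw06TApPIT5BNJSZKZ4AaABAg"]["policy"])  # regulate
```

Dropping out-of-codebook values at parse time keeps a malformed or hallucinated model response from silently entering the coded dataset.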