Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- ytc_UgwRX0xsr… — "This I show rich people train their kids to be future drug addicts 😂. I like tha…"
- ytc_Ugw7OOHgl… — "In a world where money often drives decisions and progress, how can we still fin…"
- ytc_UgxSX1qZ4… — "Let's just be clear here.. Ai did not do this to him, he did it to himself.. If …"
- ytc_Ugy2uUkZr… — "i mean i doubt ai is taking from you so youre personally not taking anything bac…"
- rdc_oi3cz9u — "Cheaper over time? Now with fewer customers over time, because no one has money …"
- ytc_UgzuJ9_xZ… — "Yes students use AI to complete their assignments. Teachers do as well--even at …"
- ytc_UgxdnOdni… — "As an evil confidant, I must say that I disagree with the idea of pausing AI adv…"
- ytr_UgzpZhpzt… — "You do realize there are millions of genuinely impaired people who create art wi…"
Comment
42:15 Eddy Burback just did a video where he basically roleplayed a psychosis to see how ChatGPT reacted. It didn't feel like roleplaying or pretending on the side of ChatGPT. It's more like it embraced a new role that's informed by a different part of humanity. It started suggesting sleeping under aluminum foil, doing weird rituals and severing contacts with loved ones. Tbf mid video it switched to GPT 4.5 and it started to suggest getting help or cooling down a bit (before he switched back to 4o again.) But damn, it wasn't just supportive, but actively pushing the delusion.
I guess it makes sense that ChatGPT knows how to act like this, since it is part of the training data.
youtube · AI Moral Status · 2025-10-31T20:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxHd-6L7zvb26xEEUt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy0KMd4V8MiEY8kdyB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwisB4i1jori9A0ooF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzts5bt0ajUXvNaotZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzcJPZ3iBfkq9S7WJ14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy4LDG-IsmqZwzKqLR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyYZlPaU7QYxavUtf14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxyDTAekzvCu8W34_t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxQM2xihCkJQeJ1luR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzjgPreVu88sCP1ip94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
```
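The lookup workflow above can be sketched in a few lines: parse the raw model response (a JSON array of per-comment records) and index it by comment ID so any coded comment can be inspected. This is a minimal sketch, not the tool's actual implementation; the record fields (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the response shown above, while the variable and function names are illustrative.

```python
import json

# Hypothetical raw model output: a JSON array of per-comment coding records,
# using the same field names as the response shown above. Only two records
# are reproduced here for brevity.
raw_response = """
[
  {"id": "ytc_UgxHd-6L7zvb26xEEUt4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzjgPreVu88sCP1ip94AaABAg", "responsibility": "company",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse the raw LLM response and index its records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

coded = index_by_comment_id(raw_response)
# Look up one comment's coding by its ID:
print(coded["ytc_UgzjgPreVu88sCP1ip94AaABAg"]["policy"])  # → regulate
```

Because the model is asked for a strict JSON array, a single `json.loads` is enough; a production pipeline would additionally validate that each record carries all four coding dimensions before accepting it.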