Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

| Comment preview | Comment ID |
|---|---|
| The difference between those who understand the mathematical underpinnings of mo… | ytc_UgxWq-_Wo… |
| My dad has always been pro-AI, and he said people who get replaced need to just … | ytc_UgxvTW0iz… |
| Hahaha merging AI and robotics some of these people so loud too rich and real wr… | ytc_UgzXSe3W2… |
| Well they see how the Witcher went and decided AI can probably copy the books/ga… | ytc_UgylZTgbO… |
| How many millions of people just accept the current thing as normal? These compa… | ytc_UgxAn51JA… |
| Wish AI can come up with a better battery chemistry that is half the price and h… | ytc_UgxCMXHAp… |
| Indeed, AI can excel in certain tasks due to their programmed efficiency. Howeve… | ytr_Ugz-UvTFQ… |
| These dummies voted for trump to save they're job and he's giving big tech and s… | ytc_UgxdM5VMV… |
Comment
I think one of Hank's best points was that the models tend to talk to users in a "roleplay" mode that the user doesn't always realize isn't totally serious. If you watch the recent Eddy Burbank video where he ends up meditating next to power antennae while covered in tinfoil (after following the advice of a chatbot for several weeks) you definitely get the strong sense that the chatbot is somehow "roleplaying" that Eddy is having a cognitive awakening. Of course, he is also "roleplaying" that he's taking it seriously. It's fascinating, and I wonder if people who study writing or literature might one day have something to say about why this sort of thing happens, or why it is or isn't effective or dangerous.
Source: youtube · Video: AI Moral Status · 2025-10-30T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwiNphKFW9X1-QaJ-14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzRjAa1xY9Z5cAgqhx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzHJqxEZwW92ojEIM54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwKEyRf9Efg1gtDGVN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzsz86Dgtuqvi6ELtx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwqp2A-ZgRV4MaerRt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwJPHWUcnvJotZFqnR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw2h4n1cyMj8mxDYGN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyAJ2kAfyBWrCvGR6F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgymcRj0Dpo-ThynfKx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
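The raw response is a JSON array of per-comment codes across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and validated before loading it into the coding table is shown below; the allowed-value lists are an assumption inferred from the values visible on this page, not a confirmed codebook.

```python
import json

# Dimensions and allowed values as observed in the raw responses above.
# NOTE: these code lists are an assumption inferred from this page; the
# actual codebook may contain additional values.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse one raw LLM response and index the codes by comment ID,
    rejecting any value outside the expected code lists."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        # Keep only the known dimensions, dropping any extra keys.
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with a hypothetical single-record batch:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
codes = parse_coded_batch(raw)
print(codes["ytc_example"]["policy"])  # → liability
```

Validating at parse time keeps malformed or hallucinated code values out of the results table, so a row like the Coding Result above can be trusted to contain only known codes.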