Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or inspect one of the random samples below (a minimal lookup sketch follows the list):

- This conversation is not about jobs that can be lost to AI, it’s about an extrem… (ytc_Ugx8EAiM6…)
- I hate ai art, the only viable ai use I care for is helping do the grunt work of… (ytc_Ugxr5B1hS…)
- I dont think so at all, if he works with AI 2027, this isn't a "Woo profits!" ki… (ytr_Ugy3rlvBd…)
- Isn't ironic im watching this on a Ai device and the algorithm put it in things … (ytc_Ugwv13S_u…)
- Ultimately a highly advanced AI could have given the most sophisticated, well th… (ytc_UgyVJgWeK…)
- What you said is nonsense, and has nothing to do with anything. 1. Copyright exi… (ytr_UgxF1psXp…)
- Rail companies have been working on positive train control aka PTC for almost tw… (ytc_UgwPHOefv…)
- Is asking Ai chats about what big writers do when the get stuck or how do they w… (ytc_UgxTHfNzi…)
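A minimal sketch of how the lookup and random-sample inspection could be reproduced offline, assuming the codings have been exported to a JSON file; the file name `coded_comments.json` and the record layout are assumptions that mirror the raw response format shown at the bottom of this page.

```python
import json
import random

# Assumption: codings exported as a JSON list of records shaped like the
# raw LLM response below (id, responsibility, reasoning, policy, emotion).
with open("coded_comments.json") as f:
    records = json.load(f)

# Index by comment ID for direct lookup.
by_id = {rec["id"]: rec for rec in records}
print(by_id.get("ytc_Ugwji-R2MVxuzaWwQpJ4AaABAg"))

# Draw a handful of random samples for manual inspection.
for rec in random.sample(records, k=min(8, len(records))):
    print(rec["id"], rec["emotion"])
```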
Comment
I REALLY hate the sensationalist AI headlines. And I'm pretty disappointed that I'm seeing them here. Everything in this video is the perfectly expected result of the tests performed, and proove exactly NOTHING in regards to AI comprehension of emotion. You yourself demonstrated how AI builds context and needs to understand the functional relation between words and how they are used. OF COURSE any half decent AI would be aware that a sister living to 5 would trigger sad emotions, or that it would be able to properly contextualize emotional scenes without stating the emotion. And not having the direct context explicitly spelled out for it is a nonsense excuse for amazement. And the language vectorization is not special either. Even if this specific model was not directly trained, unless human emotion studies and information was explicitly removed from training data, then of course it would come up with the same map. That's hardly unsettling. It's like asking two people to describe a tree and being surprised that they sound similar.
As for that "stress test", I've seen it before. The AI was directly encouraged to use the leverage with loaded language in order to trigger that vector. The AI was also DIRECTED to survive. So it had conflicting directions.
The only even moderately interesting thing here is that the emotional vector is tied to the AI reward system, and the degree that emotion affect the context internally.
And the solution? Stop coding specific instances, abilities, and rewards that are designed to achieve these sensationalist results.
Source: youtube · AI Moral Status · 2026-04-08T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
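The coded dimensions map naturally onto a small typed record. The sketch below is illustrative only: the label sets are inferred from the outputs visible on this page, and the project's actual codebook may define more values.

```python
from dataclasses import dataclass

# Label sets inferred from outputs shown on this page; the real codebook
# may contain additional values.
RESPONSIBILITY = {"none", "company", "developer", "ai_itself"}
REASONING = {"consequentialist", "deontological", "mixed"}
EMOTION = {"outrage", "indifference", "approval", "mixed"}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str          # only "none" appears in the sample shown here
    emotion: str
    coded_at: str = ""   # ISO 8601 timestamp, e.g. 2026-04-26T23:09:12.988011

    def uses_known_labels(self) -> bool:
        # True if every dimension carries a label observed on this page.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.emotion in EMOTION)
```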
Raw LLM Response
```json
[
{"id":"ytc_Ugwji-R2MVxuzaWwQpJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw3KTr4q6A-ac1KHtt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxLO-djyL4kIylRXXR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy-pZgqp2NdXxHssHN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgySR1QJjq-uwqzO7zl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxA1jToABTjg2Q_jgp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugymn9BfN5Y5DFIFmJN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwo3u1jghcBMdvUTHV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzAgtlep3mKEfMV8Nl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwcOyPRJuAXzzbWEQd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
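One way to turn such a response into per-comment records, assuming the model reliably returns a JSON array with these five keys (malformed or truncated output would need separate handling):

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw coding response and check each entry for the expected keys."""
    entries = json.loads(raw)
    if not isinstance(entries, list):
        raise ValueError("expected a JSON array of codings")
    for entry in entries:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id', '?')} missing keys: {missing}")
    return entries

# Example with a single entry copied from the response above.
sample = '[{"id":"ytc_Ugwji-R2MVxuzaWwQpJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}]'
print(len(parse_llm_response(sample)))  # -> 1
```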