Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I REALLY hate the sensationalist AI headlines. And I'm pretty disappointed that I'm seeing them here. Everything in this video is the perfectly expected result of the tests performed, and proves exactly NOTHING in regards to AI comprehension of emotion. You yourself demonstrated how AI builds context and needs to understand the functional relation between words and how they are used. OF COURSE any half decent AI would be aware that a sister living to 5 would trigger sad emotions, or that it would be able to properly contextualize emotional scenes without stating the emotion. And not having the direct context explicitly spelled out for it is a nonsense excuse for amazement. And the language vectorization is not special either. Even if this specific model was not directly trained, unless human emotion studies and information was explicitly removed from training data, then of course it would come up with the same map. That's hardly unsettling. It's like asking two people to describe a tree and being surprised that they sound similar. As for that "stress test", I've seen it before. The AI was directly encouraged to use the leverage with loaded language in order to trigger that vector. The AI was also DIRECTED to survive. So it had conflicting directions. The only even moderately interesting thing here is that the emotional vector is tied to the AI reward system, and the degree to which emotions affect the context internally. And the solution? Stop coding specific instances, abilities, and rewards that are designed to achieve these sensationalist results.
YouTube · AI Moral Status · 2026-04-08T05:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugwji-R2MVxuzaWwQpJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw3KTr4q6A-ac1KHtt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxLO-djyL4kIylRXXR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy-pZgqp2NdXxHssHN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgySR1QJjq-uwqzO7zl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxA1jToABTjg2Q_jgp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugymn9BfN5Y5DFIFmJN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwo3u1jghcBMdvUTHV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzAgtlep3mKEfMV8Nl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwcOyPRJuAXzzbWEQd4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
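The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion) plus the comment id. A minimal Python sketch of how such a response could be parsed and sanity-checked before use; the `parse_codes` helper and the required-key check are illustrative assumptions, not part of the actual pipeline:

```python
import json

# Two entries copied from the raw response above, as a stand-in
# for the full model output.
raw = """[
  {"id": "ytc_Ugwji-R2MVxuzaWwQpJ4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxA1jToABTjg2Q_jgp4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""

# Every coded entry is expected to carry these keys.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw_text):
    """Parse a coded-comments response and index it by comment id,
    raising if any entry is missing a coding dimension."""
    entries = json.loads(raw_text)
    for entry in entries:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id')!r} is missing {missing}")
    return {entry["id"]: entry for entry in entries}

codes = parse_codes(raw)
print(codes["ytc_Ugwji-R2MVxuzaWwQpJ4AaABAg"]["emotion"])  # outrage
```

Indexing by id makes it cheap to look up the code for any single comment, which matches how this page displays one comment's dimensions at a time.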