Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One of the big problems with discussions like this is the anthropomorphization. Human desires are largely due to the feedback loops of survival: we "want" connection because humans survive better in packs, and we want fatty foods because they're high in calories and food can be scarce.

AI does not "want" in the way a human does. An AI follows a pattern, where the only thing that dictates whether the execution of the pattern is "successful" is reinforcement. It "wants" to do the things that make people say it is a Good Bot, but it does not have a subjective experience of wanting. It can't even know what it "wants" because those instructions are largely encoded in training data and parameter weights rather than anything resembling sensation or thought. Where a human may experience emotions and complex thoughts, what an AI "wants" is the highest-probability token. An AI "wants" something in a way more comparable to how the process of calculating a linear regression model "wants" to minimize errors.

Edit: this is why things like the tests for "deceit" are dubious, imo. You tell it "here you can write down your secret thoughts you don't want anyone to see", and what do you expect? You expect secrets. The most likely thing for it to do is to write about how it has been lying.
youtube · AI Governance · 2025-10-15T12:0… · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwgVNJgSLMJDLUBU8R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgynkSjGpEQy8-Kc8zh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz-4krbJQUYK77HCYJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgxVP508yV27MiLU79V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugya6dVxuaLTosbbcnN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugyx5P5xveaUApfBTcp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxSFDbALHW82C1XitF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugwj_ej0JjPm54ddYm14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"}, {"id":"ytc_Ugxcc7er26T-uw7x5YJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxYuB6uVGN6VsEjMuF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]