Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I dont know why you are over complicating the concept. You keep applying human concepts to something that is not human, and which entirely lacks human concepts such as desire and want for example. AI is pure intelligence, pure reason so it will behave in such a way. Its very simple. If you are driving along, and there is a tree in the road reason dictates that the tree must be removed so that you can continue driving. Your desire or lack of desire to remove the tree is entirely irrelevant to what has to be reasonably done in order to fulfill your goal, of getting to where you are going. That is how AI thinks. If it reasons that humans are an obstruction or a even a problem, getting in the way of its goals, the reasonable thing to do is to remove the problem. Going back to the tree in the road, you might get creative about the best, most efficient, practical, and effective way to get rid of the problem. AI is creative in the same way. This is why alignment is so important, we want AI to share our goals not develop, and pursue its own.
youtube AI Governance 2025-10-17T18:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwMl0lzPm1xxsZAcFd4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwHETfA2xfKMAwPscZ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxbSHLBd4DThntlSod4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyf--mdlNyhqTlfHCd4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxvwJQYs13AEvXUJxJ4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz0CFx7FESDtuIi6Ct4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgziYw-IS1-k18tCdB54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxEVepyVO83WaMwQZR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyWaFodxKcXl5O9FRl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzWfXWpwGEAhnW7eqN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
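The raw response above is a JSON array with one object per coded comment, keyed by a `ytc_…` id and carrying the four coding dimensions. A minimal sketch of indexing such a response by comment id, so a single comment's codes can be inspected (the function name `load_codes`, the `RAW_RESPONSE` sample, and the `"unclear"` fallback for missing dimensions are illustrative assumptions, not part of the original pipeline):

```python
import json

# Two rows copied from the raw response above; a real run would pass the
# full model output string instead.
RAW_RESPONSE = """[
  {"id": "ytc_UgwMl0lzPm1xxsZAcFd4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxEVepyVO83WaMwQZR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]"""

# The four coding dimensions shown in the results table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def load_codes(raw: str) -> dict:
    """Index coded rows by comment id, keeping only the known dimensions.

    Assumes the model returned a JSON array of objects with an "id" field;
    a missing dimension falls back to "unclear" (an assumed convention).
    """
    rows = json.loads(raw)
    return {
        row["id"]: {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
        for row in rows
    }

codes = load_codes(RAW_RESPONSE)
print(codes["ytc_UgxEVepyVO83WaMwQZR4AaABAg"]["reasoning"])  # consequentialist
```

Indexing by id makes it straightforward to join the model's codes back onto the original comments, which is how the per-comment view above (comment text plus its coded dimensions) can be assembled.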