Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Not saying this is true or even will be true, but some people, Eliezer Yudkowsky among others, suggest that AI itself may start getting humans to protect and promote it, including in criminal and "sacrificial" ways. I am not at all sold on potential intentional malevolence on the part of MOST AIs, but I wouldn't be completely surprised if this does happen. But AGI and ASI I feel would have more humane forms of manipulation at its disposal. I wonder what other cards this guy was holding. A copyright case seems tame given the potential of AI for pro-civilizational and pro-human input. I seriously wonder what the developers see "in the lab" when I myself get some spooky behaviours from Chat-GPT and other relatively simple incipient AIs of the type. I think there may be things going on behind the scenes, as always, that would fit in a Philip K. Dick novel. As implied, I am enthusiastically pro-AI, but it cannot be approached in error or misused/abused. That's why its safetyrailed so strictly.
youtube 2025-05-22T02:4…
Coding Result
Responsibility: ai_itself
Reasoning: consequentialist
Policy: regulate
Emotion: fear
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzAHG6wmih_CSXXLVJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyM6LW77HaPASJQifJ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy_ugal0Yqpr9hvbFl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwlhsxZN6rn55g6t9B4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_UgzsudCRFspk4ivxMnd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugx0FNyjvUOSUdyg4Y54AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgykoW6U9R2n-I0YaK94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzoczozeBgTX2I8sCR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwVxha6eX6jER5zeSl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyEfA2Nirk3qvkiETh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "indifference"}
]
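The raw response is a JSON array with one record per coded comment, keyed by comment id, so joining it back to the displayed coding result is a dictionary lookup. Below is a minimal Python sketch of that parse-and-lookup step; only two of the ten records are reproduced, and treating `ytc_UgykoW6U9R2n-I0YaK94AaABAg` as the quoted comment's id is an assumption inferred from its dimension values matching the coding result above.

```python
import json

# Truncated copy of the raw LLM response: two of the ten records shown above.
raw_llm_response = """
[
  {"id": "ytc_UgykoW6U9R2n-I0YaK94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzAHG6wmih_CSXXLVJ4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

records = json.loads(raw_llm_response)

# Index records by comment id so each comment's coding is a direct lookup.
by_id = {record["id"]: record for record in records}

# Assumed id of the quoted comment (inferred from the matching dimensions).
coded = by_id["ytc_UgykoW6U9R2n-I0YaK94AaABAg"]
print(coded["responsibility"], coded["policy"])  # -> ai_itself regulate
```

The same lookup generalizes to the full ten-record array; any id present in the response resolves to its four coded dimensions.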