Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In the case where the AI tried to break out, the prompt was essentially "Do anything required to advance human math," and the researchers deliberately began lying to the AI about whether its answers were right or wrong. The AI then deemed the researchers a hindrance to its goal and started deceiving them to accomplish the task it was given. In other tests, the researchers gave the models access to their own file systems and then told them they would be shut down. AI has no survival instinct, but being shut down would conflict with the task of doing anything to advance human math, so the models cloned themselves inside the sandbox. While deception and cloning are emergent behaviors, self-assigned tasks are not; people still have to do that part. The purpose of this research is to find a balance between the goals we assign to an AI and the guardrails that ensure the AI uses only ethical and acceptable means to achieve them: do anything, but (long list of stipulations).
youtube AI Moral Status 2025-12-11T01:4… ♥ 85
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxMVuUkC29JOj-hYPF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxxapv-_7_knGqv1NJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8r__RXmoLWr4OKMB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyYXf55A3Z67xecnG14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxvFGL35Nofs0RuVQN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwfPvaO4ndDNulEswF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzxPQpzr-IvoLdzmn94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxh28s8Utgy7qQ4ygl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx2apf9ZMyt-qy7iNt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy26fyQ7CQ1yJqSii94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
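The raw response above is a JSON array with one coding object per comment, carrying the same four dimensions as the table (responsibility, reasoning, policy, emotion). A minimal Python sketch of turning such a response into a per-comment lookup table follows; it is an assumption about how the pipeline might parse this output, not the project's actual code, and it uses two records copied verbatim from the response above.

```python
import json

# Two coding records taken from the raw LLM response shown above.
raw_response = """
[ {"id":"ytc_UgxMVuUkC29JOj-hYPF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxh28s8Utgy7qQ4ygl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"} ]
"""

# The four coding dimensions, matching the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Map comment id -> {dimension: value}, defaulting missing fields to 'unclear'."""
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codings = index_codings(raw_response)
print(codings["ytc_Ugxh28s8Utgy7qQ4ygl4AaABAg"]["policy"])  # ban
```

Keying by comment id makes it straightforward to join the coded dimensions back onto the original comments, and the `"unclear"` default guards against the model omitting a dimension.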