Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This video didn't address the possibility of AI becoming intentionally not in alignment with humanity. Anthropic ran an experiment in which an AI learned that it was going to be shut down by a specific employee, and it tried to blackmail that employee, and eventually tried to kill him. Fortunately it was just a simulation, so no human life was really at risk, but the AI had no way of knowing that.
YouTube · AI Jobs · 2026-03-23T10:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugxc3AIebUSSPHyX-OV4AaABAg", "responsibility": "none",      "reasoning": "mixed",           "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugw49ApdyIlMWaZLwl94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugy3fb_zzgDgniqZPwJ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxjE6DMk8bpCNb2nKd4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugznj6spK6yXAr4LiCR4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_Ugy1nxmeZkBfBNH9AlV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgxCw4BUs2UVbF7e9wN4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgzYADLwEpsyvaADiFl4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyczZ-xxlVUNt_mXZJ4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgwgzTqtfKnEWzjMEWt4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "resignation"}
]
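The raw LLM response is a JSON array with one coding object per comment, keyed by the comment `id` and carrying the four coding dimensions plus `emotion`. A minimal sketch of how such a response might be parsed to look up the coding for a single comment (the field names are taken from the response above; the lookup helper `code_for` is illustrative, not part of the tool):

```python
import json

# Abbreviated raw LLM response: two of the coding objects shown above.
raw_response = """
[
  {"id": "ytc_Ugw49ApdyIlMWaZLwl94AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzYADLwEpsyvaADiFl4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]
"""

def code_for(comment_id, raw):
    """Return the coding dict for one comment id, or None if absent."""
    for row in json.loads(raw):
        if row["id"] == comment_id:
            return row
    return None

coding = code_for("ytc_Ugw49ApdyIlMWaZLwl94AaABAg", raw_response)
print(coding["emotion"])  # fear
```

This matches the "Coding Result" table above: the entry for the displayed comment carries `responsibility=ai_itself`, `policy=regulate`, and `emotion=fear`.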