Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
10:25 Claude was not engaging in “self preservation.” The EXPRESS command to not be turned off was given to it during the test to see what it’d do. This was a model without extensive RLHF training, that had been trained on endless real human data where humans very likely did things like blackmail to get their goals. If you give an AI data of humans acting badly, and you tell it to achieve a goal at any cost, it’s going to achieve that goal. This doesn’t mean AI is bad, it means that you absolutely can use an AI for bad things if it’s got no guardrails. None of the public AIs have that kind of ability.
Source: youtube · AI Governance · 2025-09-02T05:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
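
Each coded comment carries the four dimensions shown above plus a stable id. A minimal sketch of that record as a Python dataclass (the class name is hypothetical, and the value vocabularies in the comments are only those visible in this section, so they are almost certainly incomplete):

from dataclasses import dataclass

@dataclass
class CodingResult:
    # One record per coded comment; the value sets noted below are
    # inferred from this section alone and are likely incomplete.
    id: str
    responsibility: str  # observed: "developer", "ai_itself", "none", "unclear"
    reasoning: str       # observed: "consequentialist", "deontological", "unclear"
    policy: str          # observed: "liability", "none", "unclear"
    emotion: str         # observed: "fear", "approval", "mixed"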
Raw LLM Response
[ {"id":"ytc_UgxyOBguO4wmeMWC6VF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzywB572iQCYSXrU9p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgyA-Q7k9Tynem9LQVt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxsilW0ZnRWaAo5WTh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwnnxMclZOowsDcTSR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"} ]