Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The classic example is: what happens if you create a sufficiently smart coffee robot? It will notice that it can't fetch the coffee if it's dead. AI Safety scientists predicted patterns of behavior (like self-preservation and power seeking) in future AI systems before the current architecture even existed, and now we are observing the same results empirically. The reason they were able to figure it out in advance, is that they noticed that power seeking isn't a property of humans, or a property of AI. It is a property of goals. If you have a goal, then you will almost always have specific sub-goals that are instrumentally useful, no matter what the goal is.
youtube AI Responsibility 2025-05-21T21:4… ♥ 2
Coding Result
Responsibility: ai_itself
Reasoning: consequentialist
Policy: unclear
Emotion: indifference
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgzNj7LoawaE790nan54AaABAg.AIOny8dV3GbAIP5toPAVf-", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgzNj7LoawaE790nan54AaABAg.AIOny8dV3GbAIPM8kuv1Ud", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgwYyW2bpzuFuFpRbl94AaABAg.AIOnhJi3q4dAIP6ei-HXhJ", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_Ugy_nk2EiHLvLd4sPht4AaABAg.AIOjp_O-TDKAIOqV1-x02s", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytr_Ugx6ly5qkRd63SuRPdJ4AaABAg.AIOhAbV7pF1AIP5YgJCQxI", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugx6ly5qkRd63SuRPdJ4AaABAg.AIOhAbV7pF1AIRDZKS0Q5M", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgwRrnE4r2E48ZpjcoB4AaABAg.AIOgVOCxZIvAIOxEYy0pzW", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgwRrnE4r2E48ZpjcoB4AaABAg.AIOgVOCxZIvAIP95V2knev", "responsibility": "government", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_UgwRrnE4r2E48ZpjcoB4AaABAg.AIOgVOCxZIvAIPaJGgVDf9", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgyKckZe8u1grR1nO1l4AaABAg.AIOcUTaOsoQAIOr4b-GZjS", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"}
]
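Since the raw response is a JSON array keyed by comment id, inspecting the coding for one comment amounts to parsing the array and indexing by id. A minimal sketch (abridged to the first two records from the response above; the dimension names match the Coding Result fields):

```python
import json

# Raw LLM response, abridged to two of the records shown above.
raw = """[
 {"id":"ytr_UgzNj7LoawaE790nan54AaABAg.AIOny8dV3GbAIP5toPAVf-","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytr_UgzNj7LoawaE790nan54AaABAg.AIOny8dV3GbAIPM8kuv1Ud","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]"""

records = json.loads(raw)

# Index the records by comment id so any coded comment can be looked up.
by_id = {r["id"]: r for r in records}

rec = by_id["ytr_UgzNj7LoawaE790nan54AaABAg.AIOny8dV3GbAIP5toPAVf-"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {rec[dim]}")
```

This reproduces the Coding Result shown above (responsibility: ai_itself, reasoning: consequentialist, policy: unclear, emotion: indifference) directly from the model output.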