Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@DoomDebates I think having a payload (aka terminal goal) and being aligned are two orthogonal dimensions. We can easily imagine AI spending 100% of compute on recursive self-improvement until certain capability level is achieved, and then changing mode abruptly to run towards terminal goal. Similarly, we can imagine humans reaching certain level of enlightenment to suddenly do something very different from before (ascend?). I sense good sci-fi plot here.
youtube AI Governance 2025-08-24T15:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_UgwFxGOD0VzdsjR4r0R4AaABAg.9YzujpEfIPS9YzyEOgRd05","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugy4UJB489_9BnoymGd4AaABAg.AM9SFMDxJMpAMMTcm3Rtso","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgzcI2zdfWJ4NpTHw7x4AaABAg.AMFXI42tL0aAMFXdFgVeOh","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzAjO6KWMde1f8FY0N4AaABAg.AMEf_cuJfeNAMKQEiPrR6C","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_UgyA65uBpwx0OPX9NdN4AaABAg.AMD7ODg-VOZAMDe4ywoLGl","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugz-C5NpvmbiuPMLaWt4AaABAg.AMCptAHY4PnAMXJIbTsen_","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytr_Ugz71U6yLBz5XwYL5rh4AaABAg.AMCg3FjUkO6AMGWTMashoR","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugz71U6yLBz5XwYL5rh4AaABAg.AMCg3FjUkO6AMSFUZ1XE6N","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugw6xYBBLH8IwOI1teF4AaABAg.AMBiB8mEe9aAMD23PZBfB3","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytr_UgyjbFjVx6bScgzZlSJ4AaABAg.AMAVoTciG9tAMAlREF5Zuf","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
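The raw response above is a JSON array of per-comment codes, one object per comment id. A minimal sketch of how such a response could be parsed back into a per-comment lookup; the `codes_by_id` helper is hypothetical (not part of the coding pipeline), and the ids are copied verbatim from the response, truncated to two entries for brevity:

```python
import json

# Raw model response: a JSON array of coding objects, one per comment.
raw = '''
[
  {"id":"ytr_Ugw6xYBBLH8IwOI1teF4AaABAg.AMBiB8mEe9aAMD23PZBfB3","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytr_UgwFxGOD0VzdsjR4r0R4AaABAg.9YzujpEfIPS9YzyEOgRd05","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(raw_response):
    """Index a batch coding response by comment id, keeping only the coding dimensions."""
    entries = json.loads(raw_response)
    return {e["id"]: {d: e[d] for d in DIMENSIONS} for e in entries}

index = codes_by_id(raw)
print(index["ytr_Ugw6xYBBLH8IwOI1teF4AaABAg.AMBiB8mEe9aAMD23PZBfB3"]["policy"])
# → liability
```

Indexing by id makes it straightforward to check that the table shown for a given comment matches the entry the model actually returned for that id.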