Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@Josephkerr101 This is definitely addressed in his writings, but he tends to get a bit hung up on how most people struggle to follow the *how would it kill us*. I can explain more of the *why would it kill us* from Yudkowsky's framework if you want, but the core idea is that he expects future AGIs to be optimisers of some sort (after all, we train current AI systems to optimise some sort of score, like prediction accuracy), and that general optimisers tend to take control of available resources and avoid countermeasures due to a phenomenon he coined as *instrumental convergence*. If the concept of instrumental convergence is not familiar, I'd be curious if you find it compelling after checking out some of his writing on it. If it is familiar, I'd be curious where you think the gap is, regarding Yudkowsky's explanation of *why the AI would kill us*.
YouTube · AI Governance · 2024-11-12T10:5… · ♥ 29
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgxWbWU8HnwXTGqsyxd4AaABAg.AAiYeR1UICtAAimhHmYdLd", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxWbWU8HnwXTGqsyxd4AaABAg.AAiYeR1UICtAAjRPne2vJB", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgxWbWU8HnwXTGqsyxd4AaABAg.AAiYeR1UICtAAjgxLAap2x", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugw65QRNZqXybNcskx94AaABAg.AAiXbYlEStfAAmmuu4wRqq", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_Ugw65QRNZqXybNcskx94AaABAg.AAiXbYlEStfAAokZRx4eik", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugw65QRNZqXybNcskx94AaABAg.AAiXbYlEStfAAqA9Jy7zvA", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxJUObSq16yHsAotv54AaABAg.AAiV57RtZaaAAiWFsL8hXk", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugz_o_4UNQo_GhkZI294AaABAg.AAiUrQt1I1zAAidX4Bf-1N", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxWZtuw753_fj8wEad4AaABAg.AAiRc3k91HjAAjBSmlsX0M", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgwqfDGCzxp8g181J_54AaABAg.AAiNXtAPvrsAArCz1Ewg7J", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
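A raw response in this shape can be turned back into per-comment codes with a short parsing step. The sketch below is illustrative only: it assumes the model output is a valid JSON array whose objects carry the field names shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the example id `ytr_example1` and the `parse_codes` helper are hypothetical, not part of the original pipeline.

```python
import json

# Hypothetical raw LLM response with the same field layout as above.
raw = '''[
  {"id": "ytr_example1", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none",
   "emotion": "indifference"}
]'''

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw_response: str) -> dict:
    """Map each comment id to its {dimension: value} codes."""
    records = json.loads(raw_response)
    return {r["id"]: {d: r[d] for d in DIMENSIONS} for r in records}

codes = parse_codes(raw)
print(codes["ytr_example1"]["reasoning"])  # consequentialist
```

Keeping the raw string alongside the parsed codes, as this page does, makes it possible to audit any coded comment against the exact model output.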