Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@prod_liguo Maybe. I'm not sure it would be easy. But more importantly, any given AI is unlikely to have "destroy humanity" as its primary goal. Instead, the main risk is that it ends up with some weird goal(s) that emerged out of the erratic processes by which we create these AIs, and that in the long term, human existence is not optimal for the maximizing of those goal(s). In the short term (whether days or decades, nobody knows) it will need humans to do things in the physical world, and I think it's very unlikely to destroy us before it has its own superior capability to operate in the physical world.
youtube · AI Governance · 2025-11-29T00:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgxcDfCD4b3wfcrWK_p4AaABAg.AQ-fhQeIBvEAQ07l5pEBb3", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgxcDfCD4b3wfcrWK_p4AaABAg.AQ-fhQeIBvEAQ084od0PGy", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgxgCU7bY8hnLnIUD8t4AaABAg.AQ-fdfa6FC7AQ08LHbi5xq", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgxgCU7bY8hnLnIUD8t4AaABAg.AQ-fdfa6FC7AQ0E5wuojeV", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwZTecmqJLoPT5ORGZ4AaABAg.AQ-fbENq5EpAQ-gr6B1Oab", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugx_CAHkc0wBxWPF3Yp4AaABAg.AQ-edq0LAqkAQ4mtuILIPK", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugx_CAHkc0wBxWPF3Yp4AaABAg.AQ-edq0LAqkAQ5DM3dUenl", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgxoV7WygmjW9boboXd4AaABAg.AQ-e6lJWtNyAQHclEU4JzY", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgxoV7WygmjW9boboXd4AaABAg.AQ-e6lJWtNyAQekQWTUAyK", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugx0HmtYfuy5si1fF9d4AaABAg.AQ-dAuvBcDOAQ-e4JkvYXY", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
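The raw response is a JSON array with one coding object per comment id. A minimal Python sketch of how such a batch response can be parsed into a per-comment lookup (the id and values below are illustrative, not taken from the batch above):

```python
import json

# Example raw LLM response: a JSON array of per-comment codings.
# The id "ytr_example123" is a made-up placeholder.
raw = """[
  {"id": "ytr_example123",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "fear"}
]"""

# Index codings by comment id so each comment's dimensions
# (responsibility, reasoning, policy, emotion) are easy to retrieve.
codings = {row["id"]: row for row in json.loads(raw)}

print(codings["ytr_example123"]["emotion"])  # fear
```

In practice the array may contain many codings per batch, as in the response above; indexing by `id` lets the tool join each coding back to its source comment.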