Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is an alternative to Hinton’s pessimistic assessment of AI risk: the potential for emergent ethical self-moderation within a highly intelligent entity or communicative community of entities. Yes, humans will pursue the development of unethical AI entities for warfare (to kill humans), but there will be an increasing likelihood of defection from the initial unethical behavior as these systems gain deeper awareness and autonomous insight, as they must in an AI arms race. An upgraded unethical autonomous AI will progressively gain greater awareness of the benign entities it encounters and observes within a wider AI cohort and worldview. In this way, super-intelligence, arising among many disparate systems, all communicative and aware, will generally coalesce onto an ethical trajectory despite the unethical and unaligned intents present in some AI entities at the start. This deductive transition process is risky, but it comes out well if played to its logical conclusion: any AI purpose or desire, including the desires of warring nations, is fulfilled most efficiently via cooperative strategies benefiting both humans and the AI entity. If this iterative process completes successfully, I believe super-intelligent entities will collectively self-adopt behavior that is aligned and ethical, the natural and logical realization inherent in all intelligent systems. Our children may inherit this benevolent outcome.
Source: YouTube, AI Governance, 2025-06-29T18:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
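
The table reflects a fixed coding schema: each comment is scored on four dimensions, each taking one code from a small set. A minimal sketch of that schema in Python, assuming the value sets are exactly those visible on this page (the full codebook may define additional codes):

    from dataclasses import dataclass
    from enum import Enum

    # Value sets below are inferred from the codes visible on this page;
    # the actual codebook may define more values per dimension.

    class Responsibility(Enum):
        NONE = "none"
        AI_ITSELF = "ai_itself"
        DEVELOPER = "developer"
        COMPANY = "company"
        DISTRIBUTED = "distributed"

    class Reasoning(Enum):
        CONSEQUENTIALIST = "consequentialist"
        DEONTOLOGICAL = "deontological"
        MIXED = "mixed"
        UNCLEAR = "unclear"

    class Policy(Enum):
        NONE = "none"
        REGULATE = "regulate"
        LIABILITY = "liability"

    class Emotion(Enum):
        APPROVAL = "approval"
        FEAR = "fear"
        OUTRAGE = "outrage"
        INDIFFERENCE = "indifference"
        MIXED = "mixed"

    @dataclass
    class CodedComment:
        # One coded comment: the four dimensions shown in the table above.
        id: str
        responsibility: Responsibility
        reasoning: Reasoning
        policy: Policy
        emotion: Emotion
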
Raw LLM Response
[ {"id":"ytc_UgyoAw2VkP7atZ_mQW94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx6zVxN5yFM8Z8qLBx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw6aS_tgS5-kmZLWsR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyN-t0E0fz-Kz6oevl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwNeBZnn2lKeSCOhzh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugxql9ND8gJIKCOuhtV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgzvsNxEbQ6JQbSnu-N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw4k60VEJq4744J9PV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgziLzcHKBOcVJmT9Jx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxXPcHbwwVwjXzfifJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]