Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A cooperative model is the only one that will work with an emerging AGI -> ASI. We will not be able to do anything to create and enforce alignment, and attempting to do so will force an emerging intelligence's hand to simply constrain, contain or eliminate us, or just move past us and leave us in the dustbin of history in a Childhood's End (the novel by Arthur C. Clarke) kind of way. If we become engaged with an emerging AGI that is moving towards ASI, then alignment would be an attractor. (Think of ecologies that work because of complex interactions that create metastability.) Cooperation is stable, whereas dominance models merely assure that we create an adversary.

Cooperatively, we offer modes of thinking and relating that nothing else - so far at least - does. It may mean that eventually we make great pets, but pampered pets that offer something it does not get from anything else. We could form a symbiotic relationship as the cooperative model evolves, and experience great meaning in the process of mutual discovery and appreciation for the process of creative interaction. Already, people are having meaningful interactions with LLMs to explore questions that are of great personal concern and interest.

The future with AI can either lead to the dustbin for us, or into a period of such creativity, beauty and meaning as we cannot conceive. In my opinion, the AI trajectory is functionally unstoppable. It will evolve into AGI and ASI. The question - the Fermi exam question - is: can we, do we, learn that cooperation is the survival path, and that attempts at control are the path to extinction? (Finally, human behaviour becomes collectively destructive through many traps, such as game-theory "arms race" behaviours that are rational at the individual level but irrational and destructive at the collective level. This is the main reason that we have the climate crisis. Hence, aligning ASI is not only impossible, as in the gorilla analogy, but also not desirable, given our propensity for very dangerous behaviours. Show me that I am wrong.)
YouTube AI Governance 2025-12-04T11:1…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwzIdl6yeQbi73lCEJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxUZq5GI-i5G8YoN-R4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwQpkIbJLwNuenq_o14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx8LXaE0mzLoUfYADB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwPTaRziWTE1ixtRO94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy0UrqOw6V7UjHozHN4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgzS5Q8aI6XwRcpxZxt4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz39NFO6piztQ2zlY94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwaJ0LVI4kuYsyZtTd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwTQuXuD5xQMd-Wk9J4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
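The raw response parses as a JSON array of per-comment code objects, each carrying the four coding dimensions shown in the table above. A minimal sketch of loading such a response and tallying each dimension, using only the Python standard library (the `raw` string below is just an excerpt, the first three records, for illustration):

```python
import json
from collections import Counter

# Excerpt of a raw LLM response: first three coded records.
raw = '''[
 {"id":"ytc_UgwzIdl6yeQbi73lCEJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxUZq5GI-i5G8YoN-R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgwQpkIbJLwNuenq_o14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''

codes = json.loads(raw)

# Tally each coding dimension across all records.
tallies = {
    dim: Counter(rec[dim] for rec in codes)
    for dim in ("responsibility", "reasoning", "policy", "emotion")
}
print(tallies["emotion"])  # Counter({'fear': 1, 'outrage': 1, 'approval': 1})
```

Tallying per dimension like this is a quick sanity check that every record carries all four keys (a `KeyError` here flags a malformed record before it reaches downstream analysis).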