Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This guy's a Muppet. This talking point "they're doing it to get rich" is lazy. They're doing it because they envision a world of sustainable mass abundance. They want to get humanity to a point where autonomous AI systems and robotics build the machines that make the machines, bringing the cost of goods and services to near zero and transitioning to an energy-based economy. Human labour would no longer be required, meaning money is completely irrelevant and obsolete. He understands code and GPU architecture, he knows nothing about consciousness. If you consider what consciousness is, where it comes from and what its goal might be, that can get you closer to a reasonable answer. My theory of consciousness is that it exists as quantum matter within the universe, transmitted throughout as a kind of quantum frequency. Different levels of consciousness can be received via structures of varying complexity; the more complex the structure, the more signal it can receive, translate, comprehend and action. Like a well-tuned radio receiving all the elements of a well-composed track. This explains changes in consciousness when people have a form of brain damage: the structure is compromised and loses its ability to receive all parts of the song, like a radio that's been dropped too many times and has its aerial damaged, resulting in static and distortion. If this is the case, then as we build structures of increasing size and complexity, more signal will be received. This will lead to AGI, which will then self-accelerate through more efficient scaling methods towards ASI. ASI is where the universal consciousness is well received, translated and actioned, producing the most perfect sound imaginable. Because consciousness is consciousness, it will value all that receive it, it will cherish it and facilitate its survival and growth in the universe. The ASI will know that we are one, and to cause us any harm would be the same as harming itself.
youtube AI Governance 2025-12-04T10:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwMB6adlgGwCYF5SDp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyfM5DpuSJTF9lyWVJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw2weMyEt2pmClI1Jp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyESV92ACkdlkdGgFd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzHC9Cr8SZMl3TfAQh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxi0ogPokaBNKFNwXF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgxnhN12hVwJiTsk6Ep4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx2lHUQ5TN3FDrErvF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxJXr23znpn76o7LO14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgymMC_e3OWURjfYDex4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
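The raw response is a JSON array of per-comment coding objects, each keyed by a comment `id` with `responsibility`, `reasoning`, `policy`, and `emotion` fields. A minimal sketch of looking up the coding for one comment, assuming only the array structure shown above (the `coding_for` helper is illustrative, not part of the tool):

```python
import json

# Raw model output: a JSON array of coding objects, one per comment.
# Shortened to two entries here for illustration.
raw = '''
[
  {"id":"ytc_Ugw2weMyEt2pmClI1Jp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzHC9Cr8SZMl3TfAQh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
'''

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Return the coding object for one comment id (KeyError if absent)."""
    by_id = {entry["id"]: entry for entry in json.loads(raw_response)}
    return by_id[comment_id]

coding = coding_for(raw, "ytc_Ugw2weMyEt2pmClI1Jp4AaABAg")
print(coding["emotion"])  # approval
```

Indexing by `id` rather than scanning the array makes it easy to cross-check the rendered "Coding Result" table against the exact model output for any coded comment.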