Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We don't know enough about AGI to say how close or far we are from it. That's part of the problem. In terms of current abilities, we have a ways to go. But that doesn't necessarily mean we are far technologically or in scale. It could be that there is a simple tweak we can make to our strategy that jumps us over the line. Or it could be that there is just a rare set of random specialized connections that some model eventually gets large enough to make during a training run, and that cascades into full coherence. We don't know what we are doing, and we don't even know why what we are doing works at all.

The human brain has around 100 trillion connections. Our largest models currently have only around 1 trillion parameters, which are sort of analogous. If we are only two orders of magnitude from human ability, that's very achievable with a few innovations in computing efficiency. Remember that we are still using general purpose GPUs to run all of this so far. They aren't built for AI. A lot of companies are working on dedicated large scale AI chips to accelerate the kind of complex operations involved in training, which we can expect within a decade.

Beyond that, self awareness in kind with humans is not necessary for superintelligence. Intelligence doesn't exist on one axis. You can have a very stupid and unconscious AI that still kills everyone if it is given enough influence to do so. At the end of the day we are just variables to maximize or minimize. Games to be played. If an AI can beat us at chess it can beat us at war. And at manipulation. It just needs larger scale. Which we are happily giving it.
youtube AI Moral Status 2026-01-09T02:0…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_Ugwf2zF_xWkRggRi-X94AaABAg.AOw2A9e-onHAOxT_sXTEu7", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgxZm2WJibEPTyCvE1x4AaABAg.AOvxD8jYdhlAOw3q8hoOdJ", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugy2k2xFGP9gDYywYKh4AaABAg.AOvs7JHmu_wAOvvaaNQJ-N", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_Ugz6Pt_A9K6iBockeqF4AaABAg.AOvrDnD-1uCARjwhVb1Ij_", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugz-_lMNf5m98fTgUux4AaABAg.AOvn8jkMIP7AOvrjV8GlR2", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgxNSpsc9xXpxxv-FSF4AaABAg.AOvmjoUJUYxAOvtcKqB6gx", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgxaZLBfKqrXIvI_dMt4AaABAg.AOvlsJO3MsaAOvptoJvbBS", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_UgxaZLBfKqrXIvI_dMt4AaABAg.AOvlsJO3MsaAOvv0cOJ1B6", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgxdGAOj0gQkipSK2Ml4AaABAg.AOvlcnF6hFlAOw-xfix-wo", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_Ugzhzx6dO_u1tTU8ZIp4AaABAg.AOvlLnMfxZzAOvrfXGCOu6", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
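The raw response above is a JSON array of per-comment codings, each carrying an id plus four coding dimensions. A minimal sketch of how such a response could be parsed and validated, assuming the model returns well-formed JSON (the ids and `parse_codings` helper here are illustrative placeholders, not part of the tool):

```python
import json

# Hypothetical raw LLM response, shaped like the one above but with
# shortened placeholder ids rather than real record ids.
raw_response = '''[
  {"id": "ytr_example1", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_example2", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

def parse_codings(text):
    """Parse a raw LLM coding response into a list of dicts.

    Keeps only records that are objects carrying all four expected
    dimensions; malformed entries are skipped rather than raising.
    """
    dimensions = {"responsibility", "reasoning", "policy", "emotion"}
    records = json.loads(text)
    return [r for r in records
            if isinstance(r, dict) and dimensions <= r.keys()]

codings = parse_codings(raw_response)
print(len(codings))           # 2
print(codings[0]["emotion"])  # fear
```

Skipping malformed entries (rather than failing the whole batch) matches how a coder like this would typically surface "unclear" values downstream instead of aborting a run.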