Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
From what can be learned via the internet (if you happen to look in the right places, find the right sources) the current situation with machine-platformed intelligences is analogous to the world watching the advancing design of windjammer to clipper ships and ignoring the development of steam power. This seems to be something humans do well -- miss the point. Possibly a neurological explanation for "missing the point" and why it is so popular with us can be found in the emerging paradigm within the neurosciences of in/out perception (as opposed to the traditional paradigm of out/in perception everyone is familiar with). In other words, Nietzsche, rather than Descartes, for instance. I think Sam is rather well poised to appreciate the world today, given his academic and life interests. It is fascinating to live through this period with the thought -- regardless of whether the thought is true or false -- that in the future they will say of us, "They had it all wrong." Am I the only one who remembers the militaries of Russia and the US were both heavily funding AI research since at least the late 1950s? That research did not suddenly die in the US the moment the hardware strategy switched over to the software strategy. So why does the media follow the public relations firms of the tech-bros with no regard to the long, long history of AI research that is far more productive of machine-platformed intelligence than the research designed merely to sell stock? Let me guess... how many tries do I get? The dark horse will arrive suddenly, would be my guess. I mean, arrive suddenly in the public's awareness. Kind of like steam ships arrived so suddenly the clippers hung around for a few years before the shipping industry fully appreciated they were totally outclassed. "They had it all wrong." On second thought, we humans tend to forget how off-the-mark we almost always are. 
So, I take it back: We will never live to see the end of fooling ourselves about the operational definition of "sapiens". :) Nine-tenths of the Alignment Problem is artificially created via a failure to grasp the epistemologies of flesh-platformed humans, and of certain, quite specific, out-of-the-public's-eye, machine-platformed intelligence. C'est la vie. What else is new? "Everyone is a genius-thinker until asked to think genuinely beyond their comfort zone." My posts are never popular anywhere on the internet except when I am catastrophically wrong. By the way, may it not be inferred that any sentient species can in some way or another most likely suffer? And if we were to adopt a Rawlsian approach to social justice that included all sentient species, not just human, then we might be strongly inclined to treat machine-platformed intelligence even in its infancy as warranting compassion?
youtube AI Moral Status 2026-04-03T20:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugxhff7zUkGgWaSvjY94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxVBzh62BZY7L9Fb7p4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz3zO_mb0KhFASF-XF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyLLWKpNzBbKwgI_Sp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzSk3HS_qdWGJjhWXR4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzXEol_aCvFPj5QMxt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwI5guYDqqw_VgXaw14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwE-6nNrg0tnTUKo8F4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwH-dfWFh6F7nzcv6t4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy3ZLbge_WUngoFxRV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
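To inspect a raw response like the one above for a single coded comment, one can parse the JSON array and validate each coding dimension. The sketch below is illustrative, not part of any real tool: the function name `code_for` is hypothetical, and the allowed-value sets are inferred only from the labels that appear in this particular response, so they may be incomplete.

```python
import json

# One entry from the raw LLM response shown above (verbatim).
RAW_RESPONSE = (
    '[{"id":"ytc_UgwI5guYDqqw_VgXaw14AaABAg",'
    '"responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"}]'
)

# Allowed values inferred from the labels seen in this response;
# the real coding scheme may define more.
ALLOWED = {
    "responsibility": {"none", "developer", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed"},
}

def code_for(comment_id: str, raw: str) -> dict:
    """Return the coding row for one comment id, checking each dimension."""
    for row in json.loads(raw):
        if row.get("id") == comment_id:
            for dim, allowed in ALLOWED.items():
                if row.get(dim) not in allowed:
                    raise ValueError(f"unexpected {dim!r} value: {row.get(dim)!r}")
            return row
    raise KeyError(comment_id)

row = code_for("ytc_UgwI5guYDqqw_VgXaw14AaABAg", RAW_RESPONSE)
print(row["emotion"])  # fear
```

This makes the mapping from the summary table to the raw response explicit: the table's Emotion value ("fear") comes straight from the matching `id` entry in the JSON array.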