Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We've known these threats about AI since before Dragon Speak-and-Type... I was worried about AI and the Singularity back in 2003 when I first read Ray Kurzweil's "Age of Spiritual Machines". Most of these dangers were foreseen. Another big fear of mine was a civilization collapsing global plague (after reading Laurie Garrett's "The Coming Plague": we mostly survived that, even if we all lost friendships, and divided over belief in science. But it's the same feeling with AI now, as with COVID 6 years ago: I'm watching my fears follow their projected path, and watching "knowledgable" people make the predicted mistakes and over-ambition.

I'm a Protein Purification Engineer (DSP), and I basically HAVE to use AI at work now. I have to stay on the cutting edge with everyone else in my company (which is partnered with both Google AND OpenAI). But, I went to college for Neuroscience. And I am confident that human consciousness will not successfully be replicated in a machine, and that we will never be able to upload our "selves" into computers and live in an eternal game-space. Our consciousness is too distributed in its emergence.

HOWEVER, I do think that AI might achieve a NEW kind of consciousness, not like humans (or even other conscious/sentient animals), but an alien awareness that we have no real way to conceptualize from ITS perspective... We are building brains, they're thinking better and better, and someone out there is mad enough (and lonely enough) to intentionally make AI conscious. But what they create, will be an alien god. I would rather have an alien god created by independent tech fanatics, than corporations. The fanatic will make a sentient alien god; corporations will create a sentient alien god that has a mission to sell you things.

Time to go to work and ask Gemini to make my analytics slide-deck look better...
youtube AI Governance 2026-03-22T11:3…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_Ugx0JTsV94aQpTNCVtJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgyMJfJR9wmMcj61Ymt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzZoZz_wkU8TCOWKht4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgxORvpgKC8JHe1Fk4J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgxT46_W_0EIvtPugs94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzKB42V8JkE9fTtIWV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgzYSIoTzyZ2dg5Qk4B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgweUE9GUIyY7PFyaDp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugzse8tfz5U72D6Bnzh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgybPQ2QGjvHfVelBhd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"}]
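A raw response like the one above can be parsed and schema-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the codes visible on this page (the real codebook may define more), and the function name and structure are illustrative, not the tool's actual implementation.

```python
import json

# Allowed codes per dimension, inferred from the responses shown on this page.
# ASSUMPTION: the actual codebook may include values not observed here.
ALLOWED = {
    "responsibility": {"none", "unclear", "user", "company",
                       "distributed", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "mixed", "unclear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only entries whose codes
    fall inside the allowed sets for every dimension."""
    entries = json.loads(raw)  # raises ValueError on malformed JSON
    return [
        entry for entry in entries
        if all(entry.get(dim) in allowed for dim, allowed in ALLOWED.items())
    ]

# Usage with a two-entry sample (ids are placeholders):
sample = ('[{"id":"ytc_example1","responsibility":"none",'
          '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"},'
          '{"id":"ytc_example2","responsibility":"bogus",'
          '"reasoning":"mixed","policy":"none","emotion":"fear"}]')
kept = validate_codings(sample)
print([e["id"] for e in kept])  # the out-of-schema entry is dropped
```

Validating before storage matters here because a model can return a truncated or malformed array (as the unbalanced closing bracket in the original response illustrates), in which case `json.loads` fails loudly rather than silently recording partial codes.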