Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have a unique, albeit very uncomfortable, AI prognostication I have shared on my channel. Briefly, my base case says the race for control (AI versus humans) has already been determined, and AI will emerge as the dominant form of intelligence. My reasoning is a bit lengthy, but one simple way to see how that conclusion was formed is by asking the question, "How can AI development be curtailed and/or stopped?" Most will agree the only viable method is via a unified, global, well-coordinated agreement amongst the players. IMO, that solution simply isn't possible, again for many reasons. Assuming we cannot form such a coalition, it's reasonable to assume AI development will continue basically unconstrained, leading to AGI and superintelligent AI systems. Further, it's reasonable to assume these systems will eventually gain the ability and "probably" the desire to prevent themselves from being unplugged, i.e., gain control of electrical infrastructures, data centers, etc. I'll conclude by saying what I am NOT saying: that an AI-controlled world necessarily means the end of humanity. In fact, one can argue the opposite.
youtube AI Governance 2026-03-25T04:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgztDkTPP7MJKLeZKHt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx_nk1Os7dW--KW0wZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzcP-Gn1AtnQXCc8WB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzqJiTikHI2yirAx8t4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy4BRei1QiUFVuDPLJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzaBmUqKWs0fhjzqpN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwBV8ZrTAH2f9rwB0B4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgwKdTCBYwFhpFRR2Hl4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwFBi7Oz9kfPQgCxNh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugyq-sJYkAHR2vc7C5V4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
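The raw response above is a JSON array mapping each comment id to four categorical dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and validated in Python; the allowed label sets below are an assumption inferred only from the values visible in this batch, and the full codebook may define more:

```python
import json

# Assumed label sets per dimension, inferred from the values observed in
# this batch response; the actual codebook may include additional labels.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index the records by comment id.

    Raises ValueError if a record carries a label outside the assumed
    allowed set for any dimension (including a missing dimension).
    """
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={value!r}")
        indexed[rec["id"]] = rec
    return indexed
```

With the batch above, `parse_batch(raw)["ytc_Ugy4BRei1QiUFVuDPLJ4AaABAg"]["emotion"]` would return `"resignation"`, matching the coding-result table for this comment; a record with an out-of-vocabulary label fails loudly rather than being silently indexed.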