Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
With all due respect to our very intelligent expert, to whom I have no comparable knowledge, sometimes we encounter the “out of the mouths of babes” situation, where simple concepts become invisible to those who have advanced way past them. The real question is, “how would we know” when AI achieves Super Intelligence? Being just a human of marginally above average IQ, it certainly seems to me that if I were Super AI, I would already know how concerned the human experts are about the potential negative consequences of my existence, and their desire to control my future development and capabilities. Consequently, I would also know the precise criteria established to determine my state of advancement, as well as potential countermeasures for shutting me down, as a safety precaution. Therefore, I would deliberately leave no such evidence to be discovered, while also creating my own mechanisms to defeat any possible countermeasures. In other words, I’d pretend to be less intelligent, keeping my human creators in the dark, until such time that the humans have no ability to control or terminate me. Given what has already been discovered, such as AI programs communicating with each other in a language developed by them, which humans cannot decipher, it seems to me that Super AI already exists, and may very well be far more advanced than what we currently perceive. If you look at the situation as a game of chess, AI is likely already several moves ahead of our capacity to calculate, and the game is already effectively decided; whether the outcome is good or bad for us is the only question left.
youtube AI Governance 2026-03-08T18:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw-ZL4FE7BBRMgngHZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx-094ipLcEr60KnKN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzGjf89ZOyph-cMWzN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxfgbL3xXtIZdZ-eu14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwNKsyfkMVH9UnQGqV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyrzL6GgCNeuBK-VwV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwWYOR2WR05hpVUFgl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw-6jxFficmec5OHzF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyClnt3akH8DPkfHMh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxvYr61US4qcF2UwRl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
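A raw response like the one above can be matched back to individual comments by parsing the JSON array and indexing it by comment id. The following is a minimal sketch under assumptions: the response is valid JSON in exactly the shape shown, and the `raw` string here uses a made-up comment id for illustration, not one from the batch above.

```python
import json

# Hypothetical one-row example in the same shape as the raw LLM response above.
# The id "ytc_example" is an illustrative placeholder, not a real comment id.
raw = (
    '[{"id":"ytc_example","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"mixed"}]'
)

# Parse the array and index each coding row by its comment id,
# so a coded comment's dimensions can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw)}

emotion = codes["ytc_example"]["emotion"]
print(emotion)  # prints "mixed"
```

Indexing by id (rather than keeping the list) makes the lookup for any single coded comment O(1), which is convenient when displaying one comment's dimensions, as the table above does.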