Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It seems that Gödel's theorem effectively proves that alignment must fail once super-intelligent AI reaches questions that we are intrinsically incapable of providing axioms for -- like a young child incapable of a certain level of detail on what they want. From here, if not set to crash, AI must explore and interpret on its own. Relative to what our wishes would actually be, it will rapidly commence a "drunkard's walk" statistical meandering away from that -- a square root of an exponential function in time, that is, still an exponential. Factors that we don't know yet may slow this down or stop it -- but even so, would these happen in time to save us? As in a sense already pointed out by Dr. Yampolskiy, we are looking to solve an NP-complete problem in linear time. As the exponential rises above the linear projection, we are lost. The only hope is the addressing of human ego, at least through clarity of self-preservation.
youtube AI Governance 2025-09-10T06:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz63OfxBliCfCNCtX14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw-Uj7gmJ_G_un2QcV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzT5BFX_ObMWcGoOwZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxjuBVIIZ2zzrDFAuN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwt6pUfvgUUpyJ4-CZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzEGJj-M4hxda4eDU54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzL7gMRgA9taffEUel4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwVG-0gs6dgk4hvYfp4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxHYi7aBP5rRdbBQbx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugy62p-AoL1jjucJe7p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
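When inspecting raw responses like the one above, it can help to check each record against the codebook before accepting it. The sketch below is a minimal Python validator; the allowed values per dimension are an assumption inferred only from the codes visible in this dump, and the full codebook may permit more.

```python
import json

# Allowed codes per dimension -- ASSUMED from the values visible in this
# dump; the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "government", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "contractualist", "mixed"},
    "policy": {"none", "regulate", "ban", "liability", "industry_self"},
    "emotion": {"fear", "approval", "mixed", "resignation", "indifference"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and list any out-of-codebook values."""
    records = json.loads(raw)
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                # Record which comment id, which dimension, and what
                # unexpected value the model produced.
                problems.append({"id": rec.get("id"), "dimension": dim, "value": value})
    return problems

# Example with a hypothetical record id: a fully in-codebook record
# produces an empty problem list.
raw = '[{"id":"ytc_example","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
print(validate(raw))  # []
```

Returning the offending id and dimension, rather than just a boolean, makes it easy to re-prompt the model for only the records that failed.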