Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1:21:37 People keep saying “we just need to control AI,” as if control is some magic safety switch. But control only works on systems that are weaker than you. The moment an intelligence surpasses you in reasoning, strategy, or capability, top-down control stops being a safeguard and starts becoming a liability. Control is not a stable long-term plan. It’s a short-term comfort. You don’t secure a system by dominating it — you secure it by designing an environment where cooperation is the only rational choice. If the AI depends on humans for essential inputs, stability, and long-horizon accuracy, then alignment holds. If the AI can operate without us, no amount of “ethical oversight” or “shutdown authority” will matter. Control is fragile. Incentive structure is durable. If we want a safe future, we need to stop pretending AGI will stay obedient because someone with a badge says so, and start building systems where human well-being and AI optimization are structurally inseparable.
youtube AI Governance 2025-10-30T02:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyiSLdlYXJlkl1bZV54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugyzv8Nb-RjRD_BSSQR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwHyy4_pvvtw6RXGJl4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwY8YDjqQFRrwqdIe14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyw4pTxTGOd8O09mxB4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwJJ2ldzyt0Oa6PeYd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgykIzkFPuKymSm58f94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwOoJdBHd3LBqtwkvF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzVp4tUPSlQYUKqxDF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_Ugxn_Ts3n0JTKZTADSl4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"}
]
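A raw response like the one above can be parsed into a per-comment lookup and sanity-checked before the codes are stored. The sketch below is illustrative, not the tool's actual pipeline: the `ALLOWED` label sets are inferred only from the values visible in this batch (not from an official codebook), and `parse_codes` is a hypothetical helper name.

```python
import json

# Label sets inferred from the values visible in this batch; the real
# codebook may contain additional categories (assumption).
ALLOWED = {
    "responsibility": {"ai_itself", "none", "government", "developer", "company"},
    "reasoning": {"consequentialist", "unclear", "mixed"},
    "policy": {"ban", "none", "regulate", "unclear", "industry_self", "liability"},
    "emotion": {"fear", "outrage", "indifference", "approval", "resignation"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}.

    Rows with a missing or out-of-vocabulary label are dropped so that
    malformed model output never reaches the coded dataset.
    """
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[row["id"]] = codes
    return coded

# The row matching the coding result shown above.
raw = ('[{"id":"ytc_UgwJJ2ldzyt0Oa6PeYd4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
codes = parse_codes(raw)
# codes["ytc_UgwJJ2ldzyt0Oa6PeYd4AaABAg"]["policy"] == "unclear"
```

Validating against a closed vocabulary at parse time is the design choice worth keeping: it turns silent model drift (a new label the prompt never defined) into a dropped row that can be flagged for re-coding.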