Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When I hear people talk about recursive self improvement, they always then jump to super intelligence, but I personally think that is misleading. For two reasons. 1) It is very possible that in order to reach ASI, the AI needs more power requirements than actually exist, or could ever exist on Earth. We haven't even got to true AGI yet and already the power grids of most countries can't run all the AIs at max power. 2) There might be (and probably will be) a very strict limit to how smart the AI can make itself and that might not get anywhere close to ASI. For example, ask an AI to make 1 + 1 = 2 more efficient and it won't be able to do it, because that is the limit of its efficiency. I think what is more likely to happen is. Companies like OpenAI and Musk's xAI (and others), will just agree to change what the definition of AGI and ASI are. Then they will tell the public that they reached those goals and big companies will push tons of money into them, thinking that they are really dealing with ASI, when they aren't. Ultimately the biggest threat facing us over AI, isn't computers or tech, it's the human reaction to it. Humans will always be the biggest problem.
youtube · AI Governance · 2025-08-26T15:1… · ♥ 55
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugwz6ReKY9mEFJBbE1h4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwuSKQu3yNQk7C-cC54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "unclear"},
  {"id": "ytc_UgwJRJ2xMK-WREdEEJd4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxoaVIak9wSgWMH1hN4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzEV7NYE8SIsFGDhu94AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
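The raw response is a JSON array with one record per comment in the batch. A minimal sketch (in Python; the parsing code here is an illustration, not the tool's actual pipeline) of extracting the record that produced the Coding Result table above — its values (responsibility unclear, reasoning consequentialist, policy unclear, emotion indifference) match the third record in the array:

```python
import json

# Raw LLM response, copied verbatim from the record above.
raw = """
[ {"id":"ytc_Ugwz6ReKY9mEFJBbE1h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwuSKQu3yNQk7C-cC54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
  {"id":"ytc_UgwJRJ2xMK-WREdEEJd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxoaVIak9wSgWMH1hN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzEV7NYE8SIsFGDhu94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]
"""

records = json.loads(raw)

# Pick out the record for this comment by its id (the id whose values
# match the Coding Result table shown above).
code = next(r for r in records if r["id"] == "ytc_UgwJRJ2xMK-WREdEEJd4AaABAg")

# These fields correspond line-for-line to the Coding Result table.
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
```

A lookup by `id` like this is what makes batch coding robust: even if the model returns the records out of order, each coded result can still be joined back to its source comment.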