Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I didn't hear one very important bit here: AI "superintelligence" IS going to happen. It's not a Pandora's box that we can just not open, it's something that happens in all futures where humanity doesn't destroy itself earlier. It's just too efficient, too beneficial, to not happen. Also too hard to prevent, given general technical progress. Trying to stop it would be like trying to make humans go back to swinging on trees all day - you'd need a massive system to do that, and that system would involve humans not swinging on trees. Race with China has to be understood in this context. If we ban super AI, China WILL get it and WILL get it before us.
youtube AI Governance 2025-08-27T11:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugx9o3HHeecqiJNnQZZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzTFrdabq10x2SiEiR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzAB3h3nEQVzLx8TaV4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzprAF9yV335V4b9xB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxqXbUrTuffWgqFaBB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
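A raw response like the one above can be mapped back to a single comment's coding result with a short sketch. This is an illustrative parser, not part of the tool itself; the function name `codes_for` and the error handling are assumptions.

```python
import json

# Abbreviated copy of the raw LLM output shown above: a JSON array of
# per-comment records, each carrying an id plus four coding dimensions.
RAW = '''[
  {"id": "ytc_Ugx9o3HHeecqiJNnQZZ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzTFrdabq10x2SiEiR4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]'''

def codes_for(raw: str, comment_id: str) -> dict:
    """Return the coding dimensions for one comment id from a raw response."""
    records = json.loads(raw)
    for rec in records:
        if rec.get("id") == comment_id:
            # Drop the id so only the dimension -> value pairs remain.
            return {k: v for k, v in rec.items() if k != "id"}
    raise KeyError(f"no record for {comment_id}")

print(codes_for(RAW, "ytc_Ugx9o3HHeecqiJNnQZZ4AaABAg"))
```

The dimension values in the result match the "Coding Result" table rendered for the first comment.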