Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think it's a bit of a leap to assume AGI/ASI could seize the means of production, level cities and build out infrastructure for itself. It's blindingly obvious that AI is amazing at the things us turbo-charged apes find difficult (thinking, hard sums, etc.) but actually rather bad at interacting with physical reality on a sensory and manipulative level, which are things that all naturally evolved life does innately. We always thought the physical stuff would be easy and the mental stuff would be hard, because that's the way we experience reality, but in actual fact when it comes to AI it's the other way round. In effect I think this means we would - for a period at least - be able to pull the plug.
youtube AI Governance 2025-08-27T19:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_Ugw-Le6z9gL_FwSFWLt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxv1X3uA_zxENTsMM54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz7OgiEPOcVj9W6XkJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw3ml4iPHXlkcop-N14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxNifbE4iaRsvP4RkB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
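The raw response above is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of how such a batch could be parsed and checked, assuming only the five dimensions visible here (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are required; the full codebook may define additional keys or allowed values not shown in this sample:

```python
import json

# The raw LLM response shown above: a JSON array of coded comments.
raw = """[
  {"id":"ytc_Ugw-Le6z9gL_FwSFWLt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxv1X3uA_zxENTsMM54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz7OgiEPOcVj9W6XkJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw3ml4iPHXlkcop-N14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxNifbE4iaRsvP4RkB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""

# Keys every record is assumed to carry (inferred from the sample above).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
for rec in records:
    missing = REQUIRED_KEYS - rec.keys()
    if missing:
        raise ValueError(f"{rec.get('id', '?')}: missing keys {sorted(missing)}")

# Index by comment id to recover the coding for any single comment,
# e.g. the comment displayed above this response.
by_id = {rec["id"]: rec for rec in records}
coding = by_id["ytc_Ugz7OgiEPOcVj9W6XkJ4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself indifference
```

Looking up `ytc_Ugz7OgiEPOcVj9W6XkJ4AaABAg` recovers the same values shown in the Coding Result table for the displayed comment.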