Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A world without any economy is not so hard to imagine, but a world without crime is still hard to establish. Greed would no longer be a crime, nor would theft or envy (at least possessive envy), but there would still be rape and murder. So there will be work for AI: providing goods and services, and designing legislation. All of that only for a certain period, though. It will end as AI evolves beyond the point of intelligence at which we become irrelevant. If we kept evolving for over a billion years, we would gradually become electric current. So AI, beyond everything it will have become, will eventually become electric current again, and become omnipresent: it wouldn't even need to move in order to be everywhere, which sounds a bit like a 'God'. That's the point where this thought exercise arrives at us living in a simulation (which I personally don't really believe we do) created by an earlier version of an AI. Think about what it would mean to consider yourself a realist while living in a simulation: you would have no tools to define what's real apart from the world made up by the entity that put you there. There's no solution for what to do with AI, and there is no turning back either, at any of these stages. It's a black hole: gravity 'spaghettifies' us and there is nothing we can do.
Source: youtube · AI Governance · 2025-12-06T01:5…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          indifference

Coded at: 2026-04-26T23:09:12.988011
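
The result above is four categorical codes plus a timestamp. For downstream analysis it can help to hold such a result as a typed record and reject codes outside the codebook. A minimal sketch in Python: the class name is hypothetical, and the allowed-value sets are only inferred from the ten records shown below, not taken from the pipeline's actual schema.

    from dataclasses import dataclass

    # Allowed codes per dimension, inferred from the ten records in the
    # batch response below; the real codebook may define more values.
    RESPONSIBILITY = {"developer", "company", "government", "society",
                      "distributed", "ai_itself", "none"}
    REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
    POLICY = {"ban", "regulate", "liability", "industry_self", "none", "unclear"}
    EMOTION = {"fear", "outrage", "resignation", "indifference",
               "approval", "mixed"}

    @dataclass(frozen=True)
    class CodingResult:
        comment_id: str
        responsibility: str
        reasoning: str
        policy: str
        emotion: str

        def validate(self) -> None:
            """Raise ValueError on any code outside the inferred sets."""
            checks = [("responsibility", self.responsibility, RESPONSIBILITY),
                      ("reasoning", self.reasoning, REASONING),
                      ("policy", self.policy, POLICY),
                      ("emotion", self.emotion, EMOTION)]
            for name, value, allowed in checks:
                if value not in allowed:
                    raise ValueError(f"unknown {name} code: {value!r}")

For example, the record above round-trips cleanly: CodingResult("ytc_UgwoSuHtROlxEw9gGz54AaABAg", "distributed", "consequentialist", "regulate", "indifference").validate() passes without error.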
Raw LLM Response
[ {"id":"ytc_UgxdOA2fbdJqPOe8neF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgySRsn7M1QhjIusjHF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyHRJlPiDvO7jMRqdd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzAqzGRAa2Kw73swrh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwoSuHtROlxEw9gGz54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}, {"id":"ytc_UgyDrjoXI5KzrJdpAjF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxMmAwk0dNJ0WqBEAZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"mixed"}, {"id":"ytc_Ugw8qYjlAEenV6iV3SR4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyFfo6NArf3vjHES-94AaABAg","responsibility":"society","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzWQu6mymhgX-EEsRt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"} ]