Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Agree with the danger of super intelligence and a lot of jobs in future being performed by Ai. Don’t agree with 99% of jobs. And don’t agree with his timescale. It will happen much slower as it takes a lot of time to ingrate AI into every aspect of our lives. Specially in low tech sectors. We are just slow at adapting new tech on big scale. That’s why despite definite danger of climate change we still depend on fossil fuels and not implementing green and renewable energy as much as we should. Same with all other available tech like self driving cars, etc. just because better tech exists doesn’t mean we are going to utilise it everywhere immediately. As for simulation - we definitely don’t live in simulation. I know it and I can provide numerous arguments but this will make my post into ten pages doc. His assumptions are wrong. And hence his conclusions are wrong. The whole concept of how a life-like simulation would function (if it’s possible) is wrong. It would be nothing like this reality we live in. Generally, I think he has spent so much time working in AI safety that is biased (probably also because of confirmation bias). It’s like living in a bubble where you only feed yourself information that confirms your beliefs because that’s your job you are paid for (about Ai) and all other beliefs (simulation, etc). So you lose sense of reality a bit. You lose balanced and realistic perspective and critical judgment of your beliefs. And there are a lot of goods that are limited like property in big cities where cities are geographically bordered by other towns so can’t grow and there are laws restricting building high towers, etc (eg). And natural precious stones and metals are limited. So in reality there are a lot of tangible real goods that are limited. Crypto is just artificially limited. But crypto is more affordable to invest in compared to for example property.
youtube AI Governance 2026-02-12T20:4…
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | unclear
Reasoning      | consequentialist
Policy         | unclear
Emotion        | fear
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyQDwBo1lDE75gIyK14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxijyw2JTSmgFlUD114AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwVVn5I-BfeXY47f6V4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzsAHRfVavkdujhzeV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxYGu41aUmCo62uTJx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzAdFQp3fkXaPKoSgp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwQPOE1FaNy5O0Wlct4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyjiJxO82MINJJ7F894AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz3bsmp2NCLLSz2yqB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzfxCQvUWZ5U7VINDx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
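The raw response is a JSON array, one object per coded comment, keyed by the comment's `id`. A minimal Python sketch of how such a response can be parsed and a single comment's coding looked up (the excerpt below copies two entries verbatim from the response above; the lookup id is chosen for illustration only):

```python
import json

# Excerpt of the raw LLM response above (two of the ten coded comments).
raw = '''[
  {"id":"ytc_UgyQDwBo1lDE75gIyK14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxYGu41aUmCo62uTJx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# Index the coded rows by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for one comment and print its four dimensions.
row = codes["ytc_UgxYGu41aUmCo62uTJx4AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {row[dim]}")
```

This is the same id-to-dimensions mapping the Coding Result table renders; any JSON-capable language would do the same lookup.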