Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Two questions: 1) What if superintelligence, being superintelligent, simply won't be interested in developing humanoids and doing most of the work, both mental and physical, currently done by humans? We're trying to predict AI's behaviour from a human perspective, transferring our personal aims and desires onto it. It could simply be indifferent to doing a lot of 'human stuff', wealth, power and dominance included. What advantage would it gain by pervasively replacing humans? Probably the most intelligent thing anything or anybody can do is nothing at all. And maybe superintelligence knows it. Who knows.

2) Assuming that superintelligence will be interested in creating a large population of humanoids to replace us in every aspect of our routine, they have to be built. How much material, in terms of minerals, particularly rare elements, is really available? It will probably find a way to overcome the limits of the resources currently used, by developing new exploration and exploitation technologies and synthetic materials, but it could be a bottleneck for humanoid population growth. I don't really know.

Of course, these are only ideas, but I've noticed you didn't cover them. Obviously, there's so much that can go wrong, leading to some kind of catastrophic event, that it's definitely better to be prepared for some of the worst possible outcomes.

Regarding the Simulation Theory: it's speculation. There's no scientific evidence for it, and talking about probabilities is misleading, as if serious calculations had been done. On the contrary, there are academic papers in which rigorous mathematical computations refute the theory in terms of the energy required. In any case, supposing those are wrong, the theory simply shifts the question of 'real existence' up one level: how is the real world made? Are we living in a matryoshka of simulations? Should we really believe that someone in the real world is so bored as to create a virtual one just for fun and entertainment? If this 'someone' is not a human, a different type of entity may not perceive boredom, since it's a human feeling. Again, we're transferring our own characteristics onto someone or something else with probably too much confidence.
youtube · AI Governance · 2026-02-18T01:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
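
As a sketch of how a coded record like this could be checked downstream, here is a minimal validator in Python. Only the four dimension names come from the table above; the allowed label sets are an assumption, collected from the values that actually appear in the raw response below, and the real codebook may define more.

```python
# Minimal validation sketch. The dimension names come from the Coding Result
# table; the allowed label sets are an ASSUMPTION, inferred from the values
# observed in the raw LLM response below (the real codebook may be larger).
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks well-formed."""
    problems = []
    # Every id in this batch starts with "ytc_" (a YouTube comment id prefix).
    if not str(record.get("id", "")).startswith("ytc_"):
        problems.append(f"suspicious id: {record.get('id')!r}")
    for dim, allowed in ALLOWED.items():
        if record.get(dim) not in allowed:
            problems.append(f"{dim}: unexpected value {record.get(dim)!r}")
    return problems
```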
Raw LLM Response
[ {"id":"ytc_UgzTvtOvW3dgBLwt-JF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugya8-fKUxIFfM7dJJt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwA2vm1Lw2mvPvh9hd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugxo4aXxqcbLVT96hVt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyPJaptlaqvOyWawAF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxcPR8ZDFoMcUL08f54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}, {"id":"ytc_UgwzqKlpPKwx_fA89ER4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxHZKyx-y7HlitgyWh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugwnq781aPnxaEktw654AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwtgTw__aOIRAwkm4h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]