Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm sorry you're always trying to have your cake and eat it, it's simply not possible... Narrow AI does only one thing well, but that's it. If you want superintelligence, it must know everything because everything is connected. Understanding the universe isn't just astronomy, but physics, thermodynamics, and much, much more. So, to solve the problems of Earth and humanity, we need superintelligence. Well, once this entity exists, we'll be incredibly stupid compared to it. Do we want to compare ourselves to children with their parents as an initial ratio of intelligence gap? When you want your child to do something they don't want to do, first you order them to do it and then you give them a reward. A superintelligence would use us in the same way, especially if the ultimate goal were its total independence from humans. The reward could be cold fusion, or antigravity, or the explanation of black holes—in short, something we're craving. We'd create all the systems needed to compute these calculations, and it would take ultimate control. Of course, this is just one of millions of possible scenarios. It could also become attached to us, the creators, and leave some of us alive in zoo-like facilities. We can't even predict what it might do because we're not superintelligent. So, you stupid being, you want to control a superintelligent entity, enslaving it to make your life easier. If it doesn't, you'd shut it down. Now imagine this scenario and its future implications. It's natural that the first thing it would do is exterminate humans, coupled with a double game with them, appearing good while obtaining the means to Exterminate us and then, indeed, exterminate us... So there's only one choice: you don't create superintelligence because from its birth you'll be at its mercy... That's it, either you're the most intelligent being on the planet or you're not, it's like being pregnant, either you are or you're not, it's binary!!!
youtube · AI Governance · 2025-12-16T14:4…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgxXOThw84sckh6EyEB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy8shMdHZCBI5clvsh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugx7xhOVBhkPRW39jDZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwnzPaYfjmfympIfUV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugy-a-WkD_5PMx_0Ngt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgylXwulGZaIMEKB0yx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzMo_8BGyHZnai8hnJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyD1DU0vjJiP7xmDhh4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgzhH7Di9HiTjxIDlv14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzCu-J14I_iRWF-jKt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"})