Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@s.a5332 Yes, but he is right in this case and there are a multitude of experts who agree with him. People like Yudkowsky, Bostrom and many more have been talking about these risks for years, not to mention a plethora of sci-fi authors and scientists over the last century.

The big problem: we don't have any way to reliably control an artificial general intelligence, and certainly not an artificial super-intelligence. Why is this bad? Perhaps fundamentally because "values are orthogonal to intelligence": a given level of intelligence implies no particular set of values, moral or otherwise. What does that mean? You can have a super-intelligent sociopath or a super-intelligent Samaritan, or anything in between. Why is this bad? Because if it gets powerful enough or intelligent enough, there is no out-of-the-box guarantee that it will do things that we approve of. Why is that bad? Because if it is significantly more powerful than us, it might do things that are very bad for us simply in order to achieve its goal.

Why can't we just give it a goal we approve of? Because that is extremely difficult and we currently don't know how to do it. Even simple neural networks with very clearly defined goals learn a bunch of things we never intended to teach them, and the networks we have now are orders of magnitude more complex, while our understanding of them lags far behind.

There are also other risks, of course, like job loss, social instability, fake news, scams, etc. There are immense benefits too, and they might actually cancel out the smaller risks, but I don't think they cancel out the existential risk.
Source: youtube | AI Governance | 2023-05-02T16:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
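
A result like the one above maps naturally onto a small typed record for downstream processing. A minimal sketch in Python, assuming the four dimensions plus the timestamp are the whole schema (the class and field names here are illustrative, not the pipeline's actual types):

from dataclasses import dataclass

@dataclass(frozen=True)
class CodedComment:
    """One comment coded along the four dimensions shown above."""
    id: str
    responsibility: str  # observed values: "none", "user", "ai_itself"
    reasoning: str       # observed values: "consequentialist", "unclear"
    policy: str          # observed values: "none", "regulate", "ban", "unclear"
    emotion: str         # observed values: "fear", "approval", "indifference"
    coded_at: str = ""   # ISO 8601 timestamp; absent in the raw LLM response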
Raw LLM Response
[ {"id":"ytr_UgyYx0V58M6-hJO2p614AaABAg.9pDGQ_c__xf9pDTYyvSYd_","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytr_UgyYx0V58M6-hJO2p614AaABAg.9pDGQ_c__xf9pDUa7xez9B","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgyYx0V58M6-hJO2p614AaABAg.9pDGQ_c__xf9pDbidW-2LR","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgwKC_qlAHRByRAVJRh4AaABAg.9pDFa5tb_Gg9pDJ3KwJ3dT","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}, {"id":"ytr_UgwKC_qlAHRByRAVJRh4AaABAg.9pDFa5tb_Gg9pDK9BFn_rM","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgwKC_qlAHRByRAVJRh4AaABAg.9pDFa5tb_Gg9pDNWRmMnjZ","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytr_UgxXMDjlv--w1zhkL2V4AaABAg.9pDFKd4jq9G9pDQEeZljlQ","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytr_UgxYbkEobw2ftc8kNkR4AaABAg.9pDCvNkssWP9pDNEWmZ_Xx","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_UgxYbkEobw2ftc8kNkR4AaABAg.9pDCvNkssWP9pDNKYoqYJC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_Ugy0ibI3iUL2jHGYUOl4AaABAg.9pDCN6KdCDj9pDEYJJDDtR","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]