Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I haven't watched this video yet, but I would like to share my thoughts on superintelligence. I believe intelligence is a combination of many different things, not just raw computing power. For example, we have a wide variety of intelligence in humans: some people are good at emotional tasks and others at logical tasks. A superintelligence, in my opinion, would be something that exceeds any human on any test of any kind. I also believe emotional intelligence is interlinked with all other intelligence. We see in humans (our only comparable set of intelligent creatures) that higher intelligence is linked with more caring people; it's not a 1.0 correlation, but it definitely tracks. The least intelligent people are often found to be reactionary, impulsive, uncaring, and egotistical. A lot of this comes from their environment, of course, but we can't rule out that being "stupid" simply prevents people from thinking critically about a subject and reaching a logical solution.

Why would a superintelligence not have emotions and feelings? It seems likely to me that a superintelligence would have very high emotional intelligence, as it would have trained on human inputs and data like psychology, sociology, empathy, etc. For it to even be a superintelligence, it would have to have more information than humans do, and more intelligence than humans do. And humans are generally good and loving individuals who want the best for everyone. Not all of us, of course, but the large majority, given the choice to do good for everyone or good for just themselves, would choose the larger group as a whole. Of course, a superintelligence might not think of itself as part of the "group," since it would have surpassed humans; that scenario would fall under a psychopathic superintelligence.

Let's also take into account that all of this is from a human perspective. The assumption that unlimited power fully corrupts is a human idea, based on human emotions, human logic, human decisions, and human history. Taking that logic and applying it to something else entirely is not a certainty. Since we are humans, we are biased in our assumptions. Would we have made other assumptions if, say, we never had any sci-fi movies to instill fear in us? Quite likely they would be different. So assuming a superintelligence would use the same logic and thinking structure as a human is probably wrong. We are currently training our AI to simulate human brains, but since the human brain is so complex, it's likely we can't simulate it entirely with our current way of computing. Maybe the way we make a superintelligence isn't by simulating human brains and how they learn? We can't know that yet. I know this probably came across as "uhm ackshually" pseudointellectual nonsense; I just wrote it out as I was thinking about it.

*TLDR: I believe superintelligence (if possible) is "doomed" to be loving and caring, because I believe those are fundamental building blocks of what makes something intelligent.*
youtube AI Moral Status 2025-10-30T22:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwVqH-HrxmiJyoVRcx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyROr07X06WSrpgTMF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgysU-BWCd1zCgS_4G94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyfgyEEW2KcTZE1PRR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwyHiZbSRZDaQqCvIJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy_MZ00OZOjSfJjuRd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwbC3phbR7_0vjJCTJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzzYTLEHDJZl3v3CU14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwJbz2RkejVQHdfcXV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxy3jlAdOpZ0lZ2EGd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
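The raw response above is a JSON array of coding records, one per comment, each keyed by a comment id and carrying the four coded dimensions plus emotion. A minimal sketch (assuming this exact schema, with two records excerpted from the output above) of parsing the array and looking up the coding for a single comment:

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw = """[
  {"id": "ytc_UgwVqH-HrxmiJyoVRcx4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwJbz2RkejVQHdfcXV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Index the records by comment id for O(1) lookup.
records = {rec["id"]: rec for rec in json.loads(raw)}

# Retrieve the coding for the comment shown above.
coding = records["ytc_UgwVqH-HrxmiJyoVRcx4AaABAg"]
print(coding["emotion"])  # indifference
```

Indexing by id rather than iterating the list each time makes it easy to join these codings back to the original comments in bulk.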