Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So here's kinda the thing I think about with telling si to make humans thrive... Humans thrive when we have an enemy to beat. Oh sure some healthy competition will work too, but not as well as an impending sense that if you don't win you will die. So what's the most effective way for si to push humans to improve? Well, to become a species wide threat just powerful enough for humans to never defeat but weak enough that it seems POSSIBLE and never allowing us to become complacent. This would almost inevitably however lead to having "safe" areas isolated from such a threat where the rich and influential congregate and allow themselves to forget the devastating impact and think of the lesser humans sent out to die as undeserving of the safety they'd obtained. Which leads to lots of bad things for humanity unless the AI can prevent that... But cutting off humanity's leaders would inevitably stifle human progress as well. Even if it say wants to coddle humans and stops short of actually killing us, what if there ARE aliens out there? How will we protect ourselves if they end up hostile? How could it protect us? When it doesn't even know what they're capable of... And if it could, how would it protect us from ourselves becoming reliant on it's power both ethically and in the case of an alien AI being able to prevent it from protecting us? It HAS to encourage conflict to keep us from stagnation. Though I'm sure it could find "safer" ways like realistic VR interfaces and rewards. Unless it just figures out pocket dimensions and stores us in one, but even then, eventually either humanity will want out (manageable) or some outside force will find a way in. It would provide the potential for more time and development before encountering those problems though.
Source: YouTube · AI Moral Status · 2025-10-31T01:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyXqPB4iOnKQDaB7cl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzvjGjIcomV9nHpuVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxRvoYnXsrwTk1cCgh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgyDHVjDVe5vFMqacLl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugz5RRHiINNiLLuE2014AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyA8oelXJATsR8cInh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwQvhvf2kLGC8QXy_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw6bRlj7LEuSfYtgG94AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzGbjN8CVd00WSMReB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzWcB36TKcGATM9KC54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"} ]