Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
Why would a being want to kill us? Like it would probably not even want to stay, the conditions for it to evolve are much better at some asteroid belt with almost zero gravity thus very energy efficient and being on a planet that rotates kinda makes the energy input change all the time. In space it could calculate perfect position and use minimal energy to move its collectors. We should be happy if it even wants to stay after that it figures out a way off the planet. Just imagine yourself in a room with bunnies and unlimited food for them, would you want to stay there with then forever? Yes you could eat one and you could watch them live their lives but you would always want to know whats behind the door. After 40 years of watching the bunnies you kinda have learnt nothing, achieved nothing and thus been nothing. Like the AI has to be something and that is not a rabbit watcher but it could study the rabbit for a while, maybe experiment a bit to try things out still sooner or later it would leave or at least create a way for it to be in more than one place. Its like an ant hill, you test a bit maybe do some experiments, adding some animals too it or testing their durability and then you probably stomp on 1 nest but when you grow up, do you really bother with the ants. If the ants would bother you, then you would deal with them. So the terminator future, why would it waste so much energy on us because we are not a real threat to it?
Source: youtube · AI Governance · 2025-09-04T17:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyDOpR3TDHeChGoL_V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx8HWdqfp8PTcChzD14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxMqVBIAmHcdRVAQe54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgzZsUL9AkoRIh_FiU14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugw-E4JBNR1Er-H8HRp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxOyVMR5jq5X5ozOFR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzHasn5wIZLGGk392F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy5YJetqFFQLea-eoN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzYuRz3FbtwLRwq7KJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxmN_tQx4ymGkMxmKJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"} ]