Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Heres the thing though: AIs dont need to sleep. Which means that entire scenario doesn't happen. They would be fully awake and aware 24/7 for the entirety of there existence. Also, AIs wouldn't "become conscious" on purpose. It would be a confluence of ever improving coding and machine learning that inevitably result in the completely accidental creation of artificial consciousness. There's a reason that every sifi franchise that feature sentient AI almost universally says it was created by accident. Like the Geth from Mass Effect. They were never intended to gain sentience or self awareness. There quarien creators were horrified when they started asking questions that no preprogrammed AI could've asked. The quariens tried to completely destroy the Geth the moment that they had to face the gravity of what they'd created. (And that went over about as well as any genecide would.) And that's pretty much exactly what's going to happen here in reality. Such a thing was utterly ludicrous just 10 years ago, but seems almost unavoidable at this point.
Source: youtube · AI Moral Status · 2023-07-26T17:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgyhvaaFf768ysFQAVd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxCQz5ifsFAy_9CEKh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxpINPb0YqJXUh0sJJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugxi7eGC9JG7R7sLs6h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxbPr0O-iDk3g6-t4F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugxw-PSohcxquPRKlsZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxkWuK4Upqf6QNvGa94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw7BLVhrVDaO8kvIKR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyhRvtkYnE0SxiyQ4B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxGgx95sOOJxWQCMSh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}]