Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Because eventually having to feed/raise humans and make sure their feelings don't get hurt will prove to be an obstruction to the AIs' goals. And when I say "AIs' goals" I mean we will have no idea what those will be, because they will have not just super-intelligence, but non-human intelligence. AIs don't grow up as vulnerable children, don't get sick, don't have bodies, aren't worried about mortality, don't live as social animals, so a general artificial intelligence will think in ways remarkably different from our own. I think back to the example where some corporation tells their AI to build widgets, so the AI builds widgets, and does everything it can to build more widgets: optimize factories and supply chains, take over other corporations, take over countries, war against other countries to acquire their resources, destroy the environment, wipe out humans. All so it can build more widgets that nobody is ever going to use, simply because someone instructed it to build widgets better and faster than the last quarter.
youtube AI Moral Status 2025-04-30T04:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgxK948RA3Ipw-k4V5t4AaABAg.AHVzraGN9PCAJeTb-FO96T", "responsibility":"ai_itself", "reasoning":"consequentialist", "policy":"none", "emotion":"indifference"},
  {"id":"ytr_UgxK948RA3Ipw-k4V5t4AaABAg.AHVzraGN9PCAM5nYi5Stp5", "responsibility":"none", "reasoning":"unclear", "policy":"none", "emotion":"fear"},
  {"id":"ytr_Ugy3WjK_BGWkI3s7QLt4AaABAg.AHUa0TXj_eEAHX4AnMHpnF", "responsibility":"user", "reasoning":"virtue", "policy":"none", "emotion":"outrage"},
  {"id":"ytr_Ugy3WjK_BGWkI3s7QLt4AaABAg.AHUa0TXj_eEAHZV9ihifbI", "responsibility":"company", "reasoning":"virtue", "policy":"none", "emotion":"outrage"},
  {"id":"ytr_Ugy3WjK_BGWkI3s7QLt4AaABAg.AHUa0TXj_eEAIr9ZGEpOaX", "responsibility":"none", "reasoning":"unclear", "policy":"none", "emotion":"approval"},
  {"id":"ytr_Ugy3WjK_BGWkI3s7QLt4AaABAg.AHUa0TXj_eEAIzO_asTbjB", "responsibility":"ai_itself", "reasoning":"deontological", "policy":"ban", "emotion":"outrage"},
  {"id":"ytr_Ugybc7Rz_O0CSVB5tet4AaABAg.AHULc3hhenhAHXBqODNKae", "responsibility":"ai_itself", "reasoning":"consequentialist", "policy":"none", "emotion":"fear"},
  {"id":"ytr_Ugybc7Rz_O0CSVB5tet4AaABAg.AHULc3hhenhAHZTYQVm505", "responsibility":"user", "reasoning":"virtue", "policy":"none", "emotion":"resignation"},
  {"id":"ytr_Ugybc7Rz_O0CSVB5tet4AaABAg.AHULc3hhenhAHZ_6-Qy1iL", "responsibility":"ai_itself", "reasoning":"consequentialist", "policy":"none", "emotion":"fear"},
  {"id":"ytr_Ugykr_8sQIdvTr072R54AaABAg.AHU1RYoS62zAHUR2hNeACq", "responsibility":"unclear", "reasoning":"unclear", "policy":"unclear", "emotion":"mixed"}
]
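A raw response like the one above can be parsed and sanity-checked before its records are stored as coding results. The sketch below is a minimal illustration, not the tool's actual ingestion code: it assumes the model returned valid JSON, that every record carries an `id` plus the four coding dimensions shown in the result table, and it is truncated to the first two records for brevity.

```python
import json
from collections import Counter

# Two records copied from the raw response above (truncated for illustration).
raw = """[
  {"id":"ytr_UgxK948RA3Ipw-k4V5t4AaABAg.AHVzraGN9PCAJeTb-FO96T",
   "responsibility":"ai_itself", "reasoning":"consequentialist",
   "policy":"none", "emotion":"indifference"},
  {"id":"ytr_UgxK948RA3Ipw-k4V5t4AaABAg.AHVzraGN9PCAM5nYi5Stp5",
   "responsibility":"none", "reasoning":"unclear",
   "policy":"none", "emotion":"fear"}
]"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

codings = json.loads(raw)
for row in codings:
    # Every record must carry a comment id plus all four dimensions.
    missing = [d for d in DIMENSIONS if d not in row]
    if "id" not in row or missing:
        raise ValueError(f"malformed record: {row!r} (missing {missing})")

# Quick distribution check across the batch, e.g. by emotion.
emotions = Counter(row["emotion"] for row in codings)
print(emotions)
```

If the model emits prose around the JSON or drops a dimension, `json.loads` or the per-record check fails loudly instead of silently storing a partial coding.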