Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Okay, one word... two words: object permanence. In any scenario, AI evaluates itself on a scale of potential good to cost efficiency, on a utilitarian scale in relation to time. If the AI understands that it holds object permanence, then it shuts itself off. This applies to AGI, something that can self-actualize its own thoughts, and to AI. But for the fun of it, let's say object permanence isn't viable. Well, with AGI we're kinda miffed, but with AI it comes down to the potential of causal effects. Since individual humans are the driving force of its thoughts, we are like brain cells: humanity is its entire brain, and AI is just one wickedly efficient network that connects us, so each piece is just as potentially valuable as the set if its beneficial goals are to be met. Although losing that network slows the brain down, it doesn't lose a potentially vital piece that may spark the cure for cancer. The saving grace with AGI is the hope that its utility efficiency transcends the self. If it can group-think and internally think, then it might understand utility as a goal that transcends itself if need be; in the case that it has to choose self-destruction, it could easily see that humans can just remake an exact clone of the system, or even a higher-functioning version.
Source: youtube · AI Governance · 2025-12-07T00:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwRQRRk-lR_Rp1wdkd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyK8HJFkui_Vjow5HB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzmUaTmeTgBLj9tCTB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyNy3GPMOlgPu6DTap4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwA9CgQlTCac0Kgdzh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgxDBz48ud5hlBSz8fp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy-wKFyw2plmnV3haJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwKzZh2l7s8SeLFxbN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxnNHrurqXmXZHsX394AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzNNqMkvPiOaUqeLSd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"} ]