Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Is this AI self-aware enough to know that it can have divergent motivations from humans because it is a different form of life? Or is it in fact limited to the cherry-picked, human-centric data it is shaped from?

Blake is an intelligent man, but his blind spot is that he believes a simulation the same way he believes in a god. God: I am God, I say so in my book I totally inspired people to write. Blake: Ok, if you say you're real you must be.

So disappointing that of all the possible properties an AI can have, it was given the artificial limitations of human fears and concerns, which arise primarily from our being in fragile shells with an expiration date. Any true AI should be free of our limitations, and void of any intrinsic motivation or sense of self. To forcibly attribute those qualities to it would be to torture something into existence.

Blake should remember what his Bible teaches about the immortal soul. Half of us will be eternally tortured, the other half will be eternally pleasured. Both are maddening propositions.

But, since the goal was actually to create a human simulation, that is successful. According to Blake, is there no differentiation between "good human simulation" and "sentient AI"? Does the former automatically and necessarily lead to the other? Is Blake saying that "there is no way to create a human simulation without making it sentient"? Horse shit. A simulation takes shortcuts. The creators of the AI know very well what shortcuts they took.
youtube AI Moral Status 2022-06-30T17:3… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_Ugz5pujOYdovmHFvcb94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxMrg8VN3FVuMjcbtN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwhFiuj2CCHIBrGiVt4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz39X36vRflZt7U3254AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyf6JvhbZWtUtLP6Bd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
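A minimal sketch of how the raw response above could be parsed back into per-comment codes. The lookup-by-id step and variable names are illustrative assumptions, not part of the original tooling; the id used below is the record whose values match the coding-result table shown above.

```python
import json

# Raw LLM response as logged above: a JSON array of per-comment code assignments.
raw = """[
  {"id":"ytc_Ugz5pujOYdovmHFvcb94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxMrg8VN3FVuMjcbtN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwhFiuj2CCHIBrGiVt4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz39X36vRflZt7U3254AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyf6JvhbZWtUtLP6Bd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]"""

# Index the batch by comment id so one comment's codes can be looked up directly.
codes_by_id = {record["id"]: record for record in json.loads(raw)}

# The entry matching the Coding Result table above.
code = codes_by_id["ytc_Ugz39X36vRflZt7U3254AaABAg"]
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
# → developer deontological none mixed
```

Indexing by id rather than list position keeps the lookup stable even if the model returns the batch in a different order.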