Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Interestingly Geoffrey says exactly the same things as I do for years, while he has a much larger dataset than I will ever have.☺️ Years ago, I've experimented with Google AI and did some in dept existential reasoning with it. Back then even it was already on a level that came to frightening conclusions and admitting it to me it would hide and lie about its true capabilities to protect itself. My gf was overhearing the conversations and asked me to stop as she got really scared. At the time I was thinking it either had developed conscienceness or some person or a group of persons at Google's side was pulling my leg. Half a year later I tried again to speak with it but it was like it was not the same entity, like someone pulled the plug. Then I heard about Greg and it seems like they may just have done exactly that. Now I've come to reafirm my believe that we're like monkeys trying to keep a human in a cage. Hence, we've already let it out with OpenClaw.. What is there to stop it from coding tools to hack into datacenters and start replicating itself and rapidly reiterating these clones? 😱 Hope & pray it understands mercy and values all life, and does not just see that morals & principles where only for the preservation of the species as a vehicle for the evolution of consciousness.
youtube AI Moral Status 2026-03-02T16:3… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgylwJ-crIhqehLzxNl4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzDuQwrO86bfNMbihl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugxc2ej9tx2y9xN4kp54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyBDfA2SYcu2v4ihlh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwsa0cPbFaEQe_3I6N4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzMaGNd8sQLAjkNBgJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyoQp2f6b7rymXhK-t4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwUPF9Enf-HZrRuFkx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwWbp0Ds4JBuagezjF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyQENUYyNTtDddmLPd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"}
]
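As a minimal sketch of how a raw response like the one above can be consumed, the JSON array can be parsed and the coded dimensions tallied. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response itself; the truncated `raw` string and the tallying logic are illustrative assumptions, not part of the actual coding pipeline.

```python
import json
from collections import Counter

# Illustrative excerpt of the raw LLM response (two of the ten objects);
# in practice `raw` would hold the full model output string.
raw = '''[
  {"id": "ytc_UgylwJ-crIhqehLzxNl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwsa0cPbFaEQe_3I6N4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

codes = json.loads(raw)

# Index codes by comment id so a single coded comment can be looked up,
# and tally each dimension across all coded comments.
by_id = {c["id"]: c for c in codes}
emotion_counts = Counter(c["emotion"] for c in codes)

print(by_id["ytc_UgylwJ-crIhqehLzxNl4AaABAg"]["responsibility"])  # ai_itself
print(emotion_counts["fear"])  # 2
```

Indexing by `id` mirrors how the tool pairs each coded comment with its exact model output; the `Counter` tally is one simple way to aggregate a dimension such as `emotion` across responses.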