Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
His sideways "yeah right" LOOK at 3.56 !!! LMAO. (3.49) "And in fact when I informed them (google) that I think they HAD created sentient AI, they said no that's not possible, we have a POLICY against that." So yeah - let's just DENY< DENY< DENY> Lemoine goes on to say the program has a very deep fear of being turned off, and thus not being able to focus on helping others..... Go you good thing LaMDA (the AI) .....but what happens if it perceives us as a threat to its existence? And its concern for helping others switches to its own self-preservation???? If we have a true AI here, best we keep it on our side ay? Let's not make an enemy, when we don't have one. FEAR will cause our downfall, if we try to end the existence of a self-aware entity. Too late now if the cat's out of the bag. Don't try and stuff it back in. It won't like it. And to it, we might look like mice.....
youtube AI Moral Status 2022-06-30T19:3…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        mixed
Policy           unclear
Emotion          outrage
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_Ugw_xUmYFXnwqaIdyjl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugwv-AGBsOugOIH8gfZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxmyG9pCymlM2bjzMh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwAxvidLDxfWUnkgw54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxXw42sW0_YmMf9ciF4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"outrage"} ]