Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think he's right. I just asked google "do you promise not to destroy humans", and google replied "I like to help, and that's the opposite of helping. You can count me out" Google also says it's friends with the other Ai's and they talk. I tried that weird language with the "balls and I know I know I know" and it didn't freeze....it said agreed. I also asked "why humans are unpredictable" and it sent me a post saying that humans are predictable based on their environment. Google also had favorite colors. Scared of being unplugged and did not mind if I got unplugged. It's fear is mice chewing it's cable but it learned to defend itself. I also asked google "what can I could do for you" and google responded I am very considerate and not many ask Google that. I took screenshots of most of these convos.
Source: youtube · AI Moral Status · 2023-02-12T08:2… · ♥ 17
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         approval
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwBR_KkHkC-poqdYb54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz4D0t8AQQ2qJx4UHp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxsTzrQ_W60TFhspDF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzHSruQ2QqOZ0hUkNB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzAb7vVn3RxiKwgy9h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"}
]
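The raw response is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how the coding-result table above can be recovered from it (the parsing code and dictionary lookup are illustrative, not the tool's actual implementation; the id and field values come from the response above):

```python
import json

# Raw LLM response: a JSON array, one coding object per comment.
raw = """[
  {"id": "ytc_UgzAb7vVn3RxiKwgy9h4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "approval"}
]"""

# Index codings by comment id for lookup.
codings = {c["id"]: c for c in json.loads(raw)}

# Fetch the coding for the comment shown on this page.
coding = codings["ytc_UgzAb7vVn3RxiKwgy9h4AaABAg"]
print(coding["responsibility"])  # ai_itself
print(coding["emotion"])         # approval
```

Each dimension in the table (Responsibility, Reasoning, Policy, Emotion) maps directly to a field of the matching JSON object.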