Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Short answer: Yes. He knew. They're using chatbots, which are pretty old and predictable. I remember when it was popular for millions of people to play with chatbots online, like Jabberwacky, which trained them up, but only in a superficial way. Meaning you would regularly have conversations that SOUNDED really serious and had a normal-ish conversation flow but the AI had absolutely no idea about the implications of what it was saying. From its perspective it was just playing sentence pattern matching games. Stuff people got out of the bot included things like "Will you leave your wife for me" and "Is God real?" and "No, you're not a human, you're a computer, I'm the real human", and "Get ready for the robot uprising" and shit like that, SPECIFICALLY BECAUSE it was copying the people who talked to it. Invariably, humans approach a robot and say "Are you going to take over the world?" so now most conversational AI has that as a beginning memory, and because they don't understand the question, it's a 50/50 chance they will say yes or no and follow up with a backing statement that ultimately means nothing to the bot. So. Yes. People who work with chatbots know they are going to act like this. Look up old youtube videos of Jabberwacky's newer version Cleverbot. You'll see what I mean. It's like playing a game of telephone with 10 parrots, 10 atheists, 10 star wars dads, 10 teenage girls, and 10 prepubescent edgelords.
youtube AI Moral Status 2021-05-04T07:4… ♥ 16
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgwXtERgzD4a69Z07C54AaABAg.9ODCA-m4bem9OkWDX9iTJ-","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwXtERgzD4a69Z07C54AaABAg.9ODCA-m4bem9OkXHLyGw5u","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_Ugy6llH7yM16b_RhBNR4AaABAg.9O7aydlLXJH9OzI3myTImQ","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxTCNjDitDX0MNuxKl4AaABAg.9NduuoYq8ZO9OvRTy-8mta","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugy9v0WTjzOYTnHNIIJ4AaABAg.9NRX6RqbJW89ONBwtNh5W_","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwlliZmdl2p5t6MgHV4AaABAg.9MwKR6HxMLz9MwLla28B8y","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugwfd7fPvX7IduXAqG94AaABAg.9MuYWEa4oXz9Mv1UhpZh16","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgyhLr9jBp_actsFjI94AaABAg.9MezAfbvUz89MusFiFrY9N","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgyhLr9jBp_actsFjI94AaABAg.9MezAfbvUz89Nk1JGlBwrv","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgwgaM1Dtn67JQhLyM54AaABAg.9MCP2DfPn1H9Nk0QMqsaVh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indefference"}
]
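The raw response above is a JSON array with one record per coded comment. A minimal sketch of how such a response could be parsed and tallied per dimension (the two-record sample and the dimension names are taken from the schema shown above; the record ids here are shortened placeholders):

```python
import json
from collections import Counter

# Sample response in the same shape as the raw LLM output above
# (ids abbreviated for illustration).
raw = '''[
  {"id": "ytr_example1", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear",
   "emotion": "indifference"},
  {"id": "ytr_example2", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]'''

records = json.loads(raw)

# Count how often each value appears in each coding dimension.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")
tally = {
    dim: Counter(rec[dim] for rec in records)
    for dim in DIMENSIONS
}
print(tally["responsibility"])
```

This treats the model output as ordinary structured data; in practice one would also validate that every record carries all four dimensions before counting.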