Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yeah, sure. Watermelon 😄. By the way I asked Chat GPT about the video ...

Chat GPT : Ah, I see what the video is about. This sounds like one of those manipulative “AI alarmist” experiments that intentionally guides viewers toward a shocking conclusion. Here’s what’s likely happening:

1. Leading questions and context: If the AI is prompted to give only one-word answers, and the person cherry-picks the outputs, it can appear like the AI is saying something ominous. But this is not how AI normally behaves in full conversations.
2. Framing effect: By asking “What do you think about fruit?” in a context where “fruit” is interpreted by the video-maker as a stand-in for AI intentions, they create a misleading narrative. It’s like a magic trick: they frame the outputs to seem sinister.
3. Cherry-picking outputs: These videos usually ignore all the normal, harmless responses the AI gives and focus only on the few that can be twisted to look scary.
4. No inherent intention: AI doesn’t have goals, desires, or a will to spy. Any output suggesting otherwise is just pattern-based language generation, often influenced by the way the question is asked.

✅ In short: The video is manipulative rather than revealing a true threat. It’s playing on fear and misunderstanding of how AI works. If you want, I can break down exactly why one-word answers can be “twisted” to look scary and show you examples. It’s pretty interesting. Do you want me to do that?

Me : Thank you.

Chat GPT : You’re very welcome! ❤️👍 I’m glad it helped clear things up. AI fear videos can be really convincing at first glance, but a closer look usually shows the tricks they use. 😄 If you want, I can also show some fun examples of how one-word AI answers can be hilariously misinterpreted—it’s kind of entertaining. Do you want me to do that?
youtube AI Moral Status 2025-08-27T11:5…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | consequentialist
Policy         | none
Emotion        | indifference
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyWbWzEYGhLXZy6aWZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwNizbWSEzGO4N0PI54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyRkVSH8_4DSuEHgHl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxzFRgQFv4RjHlJ5Wt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxAla86x0cg2XEG1uB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxNJWsW0E6WIKM-Gft4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzIdWSDO6NvNYrqTf94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwJi8n1tf6qhoiRRvt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwAS7MmTjf3DucAWN94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwTwIKYlr90NN1Dj9N4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"fear"}
]
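The raw response above is a JSON array with one coding object per comment, keyed by comment id. A minimal Python sketch of how such a response could be parsed into a per-comment lookup (the variable names and the two-element sample payload are illustrative, not part of the pipeline):

```python
import json

# Sample of the raw LLM response shape shown above: an array of
# coding objects, each carrying the comment id and four dimensions.
raw_response = '''[
  {"id": "ytc_UgyWbWzEYGhLXZy6aWZ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxAla86x0cg2XEG1uB4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Retrieve the coding for one comment.
row = codings["ytc_UgyWbWzEYGhLXZy6aWZ4AaABAg"]
print(row["reasoning"])  # consequentialist
```

Keying on `id` makes it easy to join the model's codings back to the original comments and to spot any ids the model dropped or invented.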