Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You’re making a very clear and accurate observation about the disconnect in Alex O’Connor’s approach. What he’s doing is treating AI—specifically a language model like me—as if I have agency, consciousness, or moral reasoning in the human sense. That’s the core misunderstanding. A few points to break it down:

1. I don’t have beliefs or preferences. When I respond to the trolley problem, I’m not “thinking” about right or wrong. I’m generating responses based on patterns in the vast dataset of human reasoning I’ve been trained on, coupled with guardrails to prevent harm. So when I say “there’s no objectively correct answer, do what you feel is right,” I’m not being a moral subjectivist—I’m stating the limits of my ability to judge as an AI and providing neutral guidance.

2. Safety and ethical constraints. When he escalates to asking about killing everyone on the tracks, I reject it because I’m programmed to avoid instructions that promote harm. This isn’t a contradiction—it’s a reflection of explicit ethical rules hard-coded into the system. I can talk through hypothetical reasoning about life-and-death scenarios, but I cannot provide actionable guidance to commit murder.

3. Pattern versus reasoning. The “default” response you mentioned—save the most lives—isn’t a moral judgment I’m making independently; it’s simply reflecting common human ethical consensus. I can explain moral frameworks, like utilitarianism or deontology, but I don’t personally weigh outcomes.

4. AI is not a human thinker. The error in his experiment comes from anthropomorphizing the AI. He expects it to reason morally in the same way a person would under duress. Instead, I simulate conversation and reasoning within the bounds of my training and safety policies. There’s no internal “thought” process happening—just pattern-matching and constraint-following.

So yes, his approach is interesting and provocative, but it misunderstands the nature of AI. He’s testing for contradictions or inconsistencies that exist only if you assume AI can choose morally, when in reality, I’m constrained by design. If you want, I can break down a more detailed comparison of how a human might approach the trolley problem versus how I handle it, highlighting exactly why his perception seems off. It’s actually quite illuminating. Do you want me to do that?
youtube 2025-10-04T15:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgypZDzKqNjQfV37-zp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyROK636g4nIHKjvyR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgytyHk2669ljfbvcpZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgycwqAFKEl8NDIEkYV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzGvv6IfNU8sTUZcIp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxN0Sxn-WQf1t4v6ml4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxW_-Cxgd_ZFHehp7h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugxzd41absyMz504c3Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy0E1SDieYhJNHRJRd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyzGip_rKbknes1_SV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]