Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Listening to this, I actually hear more common ground than disagreement. The concern for agency, responsibility, embodiment, and meaning is something I share deeply. Where my experience diverges is not in whether humanness matters, but in where the risk actually sits. I don’t experience AI as replacing my thinking or eroding my humanity. For me, it removes friction between thought and expression. My responsibility, judgment, and interpretation remain fully mine. In that sense, the fear of “losing humanness” doesn’t come from the tool itself, but from disengagement — from the body, from awareness, from responsibility. I agree that dependency is a real risk. But dependency doesn’t begin with technology — it begins when people aren’t taught how to inhabit their own experience. Thinking deeply, feeling fully, and staying embodied can be painful. When no one teaches that pain is part of growth, people understandably numb out, outsource agency, or cling to systems that promise relief. From that lens, AI doesn’t create the problem — it reveals it. It exposes how much of modern work and identity has been built around output, repetition, and visible effort rather than insight, presence, or meaning. What feels missing on both sides of this debate is a deeper look at human maturity. Technology doesn’t take agency — immature relationship with technology does. The question may not be whether humans will merge with machines, but whether humans will learn to remain conscious, embodied, and accountable while using tools. Preserving humanness isn’t about rejecting technology. It’s about cultivating awareness, responsibility, and embodiment so no tool — AI included — becomes a substitute for being alive.
youtube AI Governance 2026-01-25T12:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        virtue
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugy5ABYyMVzcGuf-n014AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyMueX4N5y7vlRBt1p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyaAI-yXBa_uR1j8dx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyPM9N2SWIbHYXMkUF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzEM8ttDwik6Lbq4el4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxmfD4UX2Tb8j3jFxd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwITNSgxzJ2Os106yF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyTwqGhL9gq_98Ptmp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx1eJ3PFvGRKg951Bl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzZN6B1qjAvmgy-5EJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
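A raw response like the one above can be turned back into per-comment codes by parsing the JSON array and indexing it by comment id. A minimal sketch in Python, assuming the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) shown in the payload; the helper name `index_codes` is illustrative, not part of the tool:

```python
import json

def index_codes(raw_response: str) -> dict:
    """Parse a raw LLM response (JSON array of per-comment codes)
    and return a mapping from comment id to its code record."""
    codes = json.loads(raw_response)
    return {entry["id"]: entry for entry in codes}

# One record taken verbatim from the raw response above:
raw = """[
  {"id": "ytc_UgxmfD4UX2Tb8j3jFxd4AaABAg",
   "responsibility": "none", "reasoning": "virtue",
   "policy": "none", "emotion": "approval"}
]"""

codes = index_codes(raw)
entry = codes["ytc_UgxmfD4UX2Tb8j3jFxd4AaABAg"]
print(entry["reasoning"], entry["emotion"])  # virtue approval
```

Looking up `ytc_UgxmfD4UX2Tb8j3jFxd4AaABAg` reproduces the values shown in the Coding Result table (reasoning: virtue, emotion: approval), which is a quick consistency check between the parsed response and the rendered result.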