Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
People aren’t “failing” because they use easier tools like AI. They are responding exactly as humans are designed to. Our brains are built to conserve energy, minimize effort, and choose the most efficient path available. That’s not a flaw, it’s a basic cognitive principle. Expecting individuals to consistently ignore an easier option requires constant mental resistance, which itself consumes attention and energy in the background. Once a more efficient alternative exists, using it becomes the rational choice, not a moral one.

There’s also a social dimension. People don’t act in isolation. They operate within an equilibrium where they must keep up with others. If everyone else is using a tool to be faster or more productive, refusing to use it can put you at a disadvantage. In that context, adopting the tool is adaptation.

Blaming individuals for this is harmful because it misidentifies the problem. It turns a predictable human response into a personal flaw. The real issue lies in how systems are designed and what they reward. If the environment incentivizes speed over understanding, people will optimize for speed. That’s not pathology. Pathologizing this behavior suggests something is wrong with people, when in reality they are behaving rationally within the conditions they’re given. If we want different outcomes, we need to change the structure of the environment, not moralize individual behavior.
youtube 2026-04-20T21:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       virtue
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxnwO80BlixbAuuLW14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyJoa4UtJYHCCnCPFB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzFAYaDVgL4s3pfh9x4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwoAioSAizp73qDwMZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyoNIzctR84K41voSJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx5fd2ql2b2pv_K0gt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyx48uozsghh9j_lTZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw74bok4pZIFRBAMXF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgzYgpNP1HOI2RsQXhB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwXAY-jfdSrwcwGnX94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
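The raw response above is a JSON array with one object per comment: an `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). As a minimal sketch of how the coding shown in the table can be recovered from this output (the helper name `coding_for` is mine, not part of the tool; the field names come from the response itself):

```python
import json

# A trimmed copy of the raw model output shown above (one record kept for brevity).
raw = '''[
  {"id":"ytc_UgzFAYaDVgL4s3pfh9x4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"}
]'''

def coding_for(raw_response, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            # Drop the id so only the four coded dimensions remain.
            return {k: v for k, v in record.items() if k != "id"}
    return None

print(coding_for(raw, "ytc_UgzFAYaDVgL4s3pfh9x4AaABAg"))
# {'responsibility': 'none', 'reasoning': 'virtue', 'policy': 'none', 'emotion': 'indifference'}
```

For the comment inspected here, this yields exactly the Dimension/Value pairs listed in the Coding Result table.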