Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I want to offer a different perspective. Much of the fear surrounding AI comes from individuals who have never needed it. Their concerns often come from an able-bodied, socially secure vantage point. For people with disabilities, AI can mean greater independence. For those who have been bullied, harmed, or routinely exposed to hostile interactions, even in everyday settings, AI can offer the first space that feels genuinely safe. This isn’t an argument that AI is flawless. It’s an acknowledgment that many of us are simply worn down by human volatility and the harm it can cause. That’s part of why some people gravitate toward AI: not out of avoidance, but out of self-preservation. The issue isn’t only the technology; it’s also the lack of empathy from those who have never relied on anything outside their own capabilities. We absolutely need meaningful safety standards. But dismissing AI entirely overlooks the people who stand to benefit the most. I also want to highlight a point another commenter made: the assumption that people would become aimless without work-for-survival is misguided. Removing constant struggle doesn’t breed apathy; it creates room. Room to rest, to create, to explore, and to meaningfully contribute to our communities. Most people are exhausted, constrained by economic pressure, and limited in their ability to tap into their full potential. We’re far more than our jobs, and many have never had the opportunity to prove it.
youtube AI Governance 2025-12-12T14:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzEoQcLAfOLHCUBy8V4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyXYG9qNLQDTTa6M8N4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxjYkXY300ZZ5354It4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugyi8k2yGmCwy-0CeKV4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxq2__8wlWwzDLJwOx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy_u-Opf6Gf8eXVecR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxZ6qpoeyII3W1w-Mp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgxUFW4CtOJqmYuf6pt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzldA48Rz_HBGA8Ggx4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw5iQaSZjJOIAvGREV4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
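A raw batch response like the one above is typically parsed and validated before the per-comment rows reach the result table. The sketch below shows one way to do that in Python. The names `SCHEMA` and `parse_codings` are hypothetical, and the allowed label sets are inferred from the ten rows in this example only; the real codebook may define additional categories.

```python
import json

# Allowed labels per coding dimension, inferred from the rows shown above
# (assumption: the actual codebook may permit more values).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "industry_self", "liability", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed", "unclear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only rows with valid ids and labels."""
    rows = json.loads(raw)
    return [
        row
        for row in rows
        if row.get("id", "").startswith("ytc_")
        and all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Second row of the raw response above, i.e. the coding shown in the result table.
EXAMPLE_RAW = (
    '[{"id":"ytc_UgyXYG9qNLQDTTa6M8N4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
)
```

Filtering rather than raising on bad rows is a deliberate choice here: LLM coders occasionally emit off-schema labels, and dropping those rows (to be re-coded later) keeps a batch run from failing wholesale.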