Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
​@TheRolemodel1337 none of us could possibly assess any sort of multi-variate heuristic an AGSI-like entity could come up with - and, in a very roundabout and vague way, intuit about. I say that as someone who is fully on board with the notion that AI, as it were, is about the only thing steering us as a society away from certain near-doom, but if we take the (highly exaggerated and not at all reliably prescient) dystopian novels and movies at face value, it's almost all utter nonsense and, at best, fun musings I can easily suspend my beliefs for. I want to be entertained after all. I find it telling that we love latching onto this pessimistic view of AI, and it shows that we're so absorbed in ourselves, we can't even begin to muster any sort of thought about AI developing a much more agreeable concept of, say, morality. Nah, bröder, you're framing atrocity through the history of humanity, while also heavily biasing the "shocking" and violent outlooks for a couple of reasons. I guess I do agree in some sense that a quick ascent is positive. Not only because a slow transition without lawmakers doing the bare minimum (we might as well petition them with a heart-shaped candybox if we don't line their pockets with absurd amounts of money) is going to exacerbate all the many problems we might face anyway, but also because it might personally afflict living and thinking beings into a state of actual and pure hatred quite reminiscent of I Have No Mouth or a particular plot strand in the Bobiverse series - either way: best we can do is heavily incentivize proper academic/institutional oversight, penalize being a dingus (yes, everyone is thinking of Musk), and educate yourself and others about the underlying issues and how it all works. 
I don't think it would be easy for humanity with its current destructive capabilities (and our documenting possible outcomes on top) to not appear as somewhat of a threat, even to advanced intelligent beings far outclassing us in every respect. It's up in the air, no telling where we will end up.
youtube · AI Governance · 2025-08-26T16:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgzO_G9YC2Wobif2Mep4AaABAg.A7n8JhHjnvgAK54baNBYW_","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgzHkWTR94rpctvq-AN4AaABAg.AMIEO0yu8B-AMIHPkLueNw","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgzHkWTR94rpctvq-AN4AaABAg.AMIEO0yu8B-AMIJXqEz4iq","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzHkWTR94rpctvq-AN4AaABAg.AMIEO0yu8B-AMILDn5sWuV","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgxgV007KolTWoOkKKV4AaABAg.AUP8nC036d7AV30dPl_IY7","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgzqANr5B0YKG18COLh4AaABAg.AUKdzTpDkS9AVpLvWkICsq","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxGm9vCPtEUbYZWLHR4AaABAg.ARUpm-YqLzMARbdsKOYusz","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_Ugz-xuMsEud45RSBu054AaABAg.AQOrXkWYhkBAQPYtorVBQm","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_Ugy5MUZKb9MiLFvneit4AaABAg.AQIYvpel94_AQS3BYTSHc8","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytr_UgzMjrF7-dh3sUD1-gd4AaABAg.APcC88-bCd2AQB8thYjBuJ","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
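The raw response above is a JSON array in which each record carries a comment `id` plus the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and a single comment's coding looked up — `find_coding` is a hypothetical helper, not part of any documented pipeline, and the sample ids/values below are illustrative only:

```python
import json

# Illustrative raw LLM response: a JSON array of coded-comment records.
RAW_RESPONSE = """[
  {"id": "ytr_example1", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_example2", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

def find_coding(raw: str, comment_id: str):
    """Return the coded dimensions for comment_id, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            # Strip the id so only the coded dimensions remain.
            return {k: v for k, v in record.items() if k != "id"}
    return None

coding = find_coding(RAW_RESPONSE, "ytr_example1")
print(coding)
```

One design point worth noting: keying the lookup on the `id` field (rather than array position) makes the coding robust to the model reordering or dropping records in its response.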