Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Work has been pushing AI, we just got a copilot subscription, and I hate it. I've never used genAI, and I never will, but I don't know if i'll be able to escape the grasp of the AI "productivity" tools for much longer. I have friends who've used genAI to make art for small things, which I don't *like* but I think is a totally fair use case for AI. You can train something like that yourself, on your own dime. It's not gonna cause nearly as much damage as the big corpo "just use AI for everything" mindset. It is scary, though. A lot of people are being directly threatened by AI. My job is safe, because I work in a financial field (nothing fancy, i'm not making big bucks or anything) so pretty much everything we work with is very private data we cannot legally allow to be seen by almost anyone not on our team or at the client company. So training any theoretical AI has been in a constant state of "we're trying to figure out how do this legally" for like, two or three years, so I don't expect that to get anywhere. But for others... it's like a gun is being held to their head, saying "you can use this AI to ruin your cognitive capacity or you can keep your job" even if it's not actually a direct threat, it's going to be looming over them, threatening their mental state. It's a bit disconnected for me because, as I said, I'm not under direct threat, but if I take what I actually went to school for (comp sci) I can easily see the problem. Either you use AI to write probably functional code more quickly than you can, at the cost of nobody having actual understanding of the code base (which can be a huge problem) or you have to spend a lot of time doing it yourself. In more competitive jobs, that could cost you your job. Fast results vs good results. That's a lot of stress to be under when there's this AI tool right there, tempting you to "just ask chatgpt to do it, save yourself the time and effort, what's the worst that could happen?" It's an easy out, and I despise it. 
Not even in a "hard work is the only way to do it" sense, programmers are lazy, we copy code from stack overflow like God intended, and spend hours or days to automate something that saves us like two minutes, but that's because we actually (should) understand how things work. It's a complex beast. I'll also note I've heard that AI is good at writing functional code, but I've also heard chatter saying that its code quality is absolute trash. I haven't looked at it myself so I can't really speak to that, but if true that's also a pretty serious problem. "What's the problem with bad code?" Nothing. Until someone else has to update it, or it causes a massive problem (hello, microsoft, thanks for being topical, amazing update you just put out after claiming a good chunk of your code is written by AI) and because of a lack of a human element, oops, nobody knows what's going on with it without expending a lot more effort. It's just a matter of time until someone gets lazy and tells an AI to write code for something critical and just absolutely makes a mess of it. I don't think skynet will be malicious. I think it will be impassive, and that might just be worse.
Source: YouTube · "Viral AI Reaction" · 2025-09-05T14:0…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        mixed
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxMK94tKyi9h_nbXCh4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyZ0hGTaqvyTCGMbNR4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzV95DvNh2L3UjznrZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzNfw15DdPleQMZVU94AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "disapproval"},
  {"id": "ytc_UgzuDUpbo3Uf9egpMbB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwG4gzfOoU_bbFvEPN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzDuLl52K4kY2G-JIp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxRu0oiBrd9rOzOWFN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxY3-KCvx7Q_TsLDNd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgypAWlQknAtHN9xwy54AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"}
]
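The coding-result table above appears to be derived from the raw LLM response by matching on the comment `id` (the displayed values — user / mixed / none / resignation — match the record with id `ytc_UgyZ0hGTaqvyTCGMbNR4AaABAg`; that this id belongs to the comment shown is an inference, not stated in the source). A minimal sketch of that lookup, assuming the raw response is a valid JSON array of per-comment codes:

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codes.
raw = """[
  {"id": "ytc_UgyZ0hGTaqvyTCGMbNR4AaABAg", "responsibility": "user",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgypAWlQknAtHN9xwy54AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "unclear", "emotion": "fear"}
]"""

# Index the coded records by comment id for direct lookup.
codes = {record["id"]: record for record in json.loads(raw)}

# Pull out the dimensions for the comment displayed on this page.
row = codes["ytc_UgyZ0hGTaqvyTCGMbNR4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → user mixed none resignation
```

In practice the parse step should be wrapped in error handling (`json.JSONDecodeError`), since LLM output is not guaranteed to be well-formed JSON.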