Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm a senior software engineer working for a large company that has declared itself all-in on AI and has gone to great lengths to make sure we have the latest tools at our fingertips.

There are some situations where AI shines. I was thrust into an administrator role for a support application I knew little about, and ChatGPT was a huge help in troubleshooting the problems I ran into, at least once I learned to treat its suggestions as ideas to explore. It is also great at explaining concepts to me, as long as I'm willing to understand those concepts myself so I can discern when it is feeding me stories.

Along those same lines, I've learned that unless you can verify what it tells you, it's better not to ask. Just last week I asked Claude (via Copilot) to count the number of files with a certain file extension in my work project. It confidently assured me that there were 90 of them. I got burned after I offered an estimate based on that number, then learned from a manual recount that there are in fact 146. Ouch. When I confronted the AI about the discrepancy, it admitted it hadn't actually done what I asked: it had just looked at my currently opened files, analyzed references to other files it found there, and called that the final number. "I don't know" would have been a much better answer.

ChatGPT also made a dog's dinner of correlating two lists in an Excel spreadsheet, inexplicably dropping about 100 of the 600+ IDs in the second list because I had described them as four alphanumeric characters long and it decided some of them were something other than four (they weren't, and it wasn't a problem with leading zeroes either).

When I asked Claude to refactor one of my code files, it actually did a pretty good job, saving me some time. But it simultaneously deleted two other files for reasons I could not get it to adequately explain, merely apologize for.

Fortunately, my trust was already pretty thin, so my source control restored those files easily enough. But I just don't see how I'm supposed to "treat it as a junior engineer" and give it assignments. Fat chance of that.
youtube 2025-11-29T04:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzeRliUtoAEyiFGveF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzLGQc0R9ZdApjB2Gx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwnUCUurzu5biN2mM94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz36WPoaGsmALDHbv94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyIKNvll0bTZxZ0iLp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy-zaJ8EmoSYC-1qMd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyb07L8aU1TKUvr7Et4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwgGrs5yRK1bVZjw594AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxn5XhyLAXP2Bt23Cp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz0ivoq2KlBKjKxesJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
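A raw response like the array above is just a JSON list of per-comment codes, so it can be parsed and tallied directly. Below is a minimal sketch of that idea; the two-element `raw` string is an illustrative excerpt, not the full dataset, and the dimension names simply mirror the keys seen in the response.

```python
import json
from collections import Counter

# Illustrative two-item excerpt of a raw coding response (not the full array).
raw = (
    '[{"id":"ytc_UgzeRliUtoAEyiFGveF4AaABAg","responsibility":"none",'
    '"reasoning":"mixed","policy":"none","emotion":"approval"},'
    '{"id":"ytc_UgzLGQc0R9ZdApjB2Gx4AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]'
)

codes = json.loads(raw)

# Tally each coding dimension across all coded comments.
tallies = {
    dim: Counter(c[dim] for c in codes)
    for dim in ("responsibility", "reasoning", "policy", "emotion")
}

print(tallies["emotion"])         # counts per emotion label
print(tallies["responsibility"])  # counts per responsibility label
```

This kind of tally is a quick sanity check that the per-comment codes in the raw response line up with the summarized "Coding Result" shown for each comment.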