Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For contrast, my company (a massive FAANG company) implemented a Claude agent trained on our codebase, which is built into VS Code. It’s very context-aware and can handle most simple tasks. Yesterday I was trying to create a new variant for one of our button components to be used with a new design we got. Nothing too crazy, just a gradient outline around a simple button. I figured I’d let it take a stab at the task and it made an absolute mess of the component. I was curious to try out some different prompts and more direct instructions but it just couldn’t get it down. If I didn’t know how to build the button myself I would have been completely lost with what it was trying to do. I ended up spending 15 minutes on it and knocking it out myself. AI agents are cool and they can do a lot of pretty simple tasks, but their scope is very limited at the moment, and their ability to iterate on what they’ve built breaks down more and more as you try to get them to correct what was wrong the first time. They’re useful tools that everyone should learn how to use, but there’s no need to be afraid. At least not yet.
reddit · AI Jobs · 1752756944.0 · ♥ 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n3krgdw", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n3l7co5", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n3livdi", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_n3m4chw", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_n3mfwvm", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
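The raw response is a JSON array of per-comment codes, one object per comment id with the four coding dimensions. A minimal Python sketch for validating and indexing such a batch — the required-field check and the emotion tally are illustrative, not part of any actual pipeline:

```python
import json
from collections import Counter

# The raw model output shown above: five per-comment codes.
raw = (
    '[{"id":"rdc_n3krgdw","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_n3l7co5","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"},'
    '{"id":"rdc_n3livdi","responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"none","emotion":"outrage"},'
    '{"id":"rdc_n3m4chw","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"resignation"},'
    '{"id":"rdc_n3mfwvm","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"}]'
)

codes = json.loads(raw)

# Every record must carry an id plus the four coding dimensions.
required = {"id", "responsibility", "reasoning", "policy", "emotion"}
for rec in codes:
    missing = required - rec.keys()
    if missing:
        raise ValueError(f"record {rec.get('id')} missing fields: {missing}")

# Index by comment id to look up the code for a single comment.
by_id = {rec["id"]: rec for rec in codes}
print(by_id["rdc_n3mfwvm"]["emotion"])  # approval

# Tally emotions across the batch.
print(Counter(rec["emotion"] for rec in codes))
```

Indexing by `id` mirrors how the per-comment result above (Emotion: approval) would be pulled out of the batch response.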