Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The effort to instrument mostly goes into things that are helpful to devs too:

* consistency (naming conventions, folder layout, deployment tools)
* just like people, every exception needs extra scaffolding in .md files
* centralised code
  * if you have teams with 1000+ old repos split between GitHub, GitLab, and some old binaries where no one knows where the source is, it sucks
* tests, contracts in code
  * ways to validate changes easily, quickly
* CI/CD, non-prod envs that match prod well, fully realistic test data, scenarios
* strong typing ("any" or string everywhere is a bad time)
* lots of README and similar files in the codebase explaining everything, in plain English
* coding standards, conventions doc
* etc.

So yes, non-trivial effort, but you need all this stuff anyway. The LLM-specific work (AGENTS.md/CLAUDE.md, skills) isn't actually that much on top of this: a well-structured set of plain-text markdown files. And mostly the LLM can generate those, i.e. "we just had a chat and it took you 18 commands and 5 minutes to work out how to pull logs from these 5 systems; put together a skill, create a PR".
reddit AI Jobs 1773527732.0 ♥ 4
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_oagmn2q", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_oadil27", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_oae1y3t", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oah81vl", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oafap0s", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
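The raw response is a JSON array with one code record per comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal Python sketch for loading and tallying such a response; the record layout is taken from the output above, but the tallying itself is illustrative and not part of any tool shown here:

```python
import json
from collections import Counter

# Raw LLM response as shown above: one code record per Reddit comment.
raw = (
    '[{"id":"rdc_oagmn2q","responsibility":"company","reasoning":"deontological",'
    '"policy":"none","emotion":"outrage"},'
    '{"id":"rdc_oadil27","responsibility":"user","reasoning":"virtue",'
    '"policy":"none","emotion":"mixed"},'
    '{"id":"rdc_oae1y3t","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_oah81vl","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_oafap0s","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"none","emotion":"mixed"}]'
)

records = json.loads(raw)

# Tally two of the coding dimensions across all records.
emotion_counts = Counter(r["emotion"] for r in records)
reasoning_counts = Counter(r["reasoning"] for r in records)

print(len(records))                     # 5 records in this batch
print(dict(emotion_counts))             # e.g. counts per emotion label
```

Note that the per-record codes here differ from the single "Coding Result" table above, which reports one value per dimension; how those batch codes are reduced to a single result is not shown in this output.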