Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is true that LLMs cannot keep track of an entire application. There have been massive improvements though. Within a single file, Gemini Pro 2.5 is able to handle scripts with hundreds of lines, make incremental changes, and not regress everything each time. That was a big issue with early LLMs. When LLMs have to go through multiple files, they can't do the job. The modern junior dev is there to debug and give the correct prompt to the LLM to fix the code. And even that can fall short if the change is tightly coupled to the project and is not universal knowledge.
reddit AI Responsibility 1756558395.0 ♥ 48
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_nbmjda0", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nbh9pmw", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_nbibwas", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_nbj0tz4", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_nbhfoff", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
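A minimal Python sketch of how a raw response like the one above might be parsed during inspection. The ids and field names are copied verbatim from the response; the variable names (`raw`, `by_id`, `approval`) are illustrative, not part of any tool shown here.

```python
import json

# Raw model output as shown above: a JSON array of coded records,
# one record per coded comment id.
raw = """[
  {"id": "rdc_nbmjda0", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nbh9pmw", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_nbibwas", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_nbj0tz4", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_nbhfoff", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

records = json.loads(raw)

# Index records by comment id so one comment's coding can be looked up directly.
by_id = {r["id"]: r for r in records}

# The coding table above shows emotion "approval"; filter for the matching record.
approval = [r for r in records if r["emotion"] == "approval"]
print(len(records), approval[0]["id"])  # → 5 rdc_nbhfoff
```

Indexing by `id` mirrors how the table above pairs a single comment with its coded dimensions; the same lookup works for any of the five ids in the batch.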