Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Just wanted to say: even with o3, Gemini 2.5, Claude 4, and all the new reasoning models, it still sucks. There are a few use cases, however:

1) Writing a single function, class, or piece of boilerplate that is easy to articulate (essentially, you guide it to write small amounts of code at a time)
2) Optimizing the runtime of small sections of existing code that are 100% constrained by tests for quick verification and have been profiled for optimization targeting
3) Quick proofreading of big code blobs, which can sometimes illuminate subtle issues that have been overlooked
4) General questions about libraries where the source code is public and reachable during search, saving time versus manually going through someone else's code
5) Traceability studies where tests are linked semantically with plain-language requirements
6) Explaining what uncommented / undocumented code does

Not good for anything, yet, that requires lengthy explanation (more than two paragraphs), or is somewhat novel or clever.
reddit AI Jobs 1748587401.0 ♥ 1
Coding Result
Responsibility: unclear
Reasoning: unclear
Policy: unclear
Emotion: unclear
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_mmc6qzt","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_mv0xbtz","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"rdc_mlea0da","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_mlh2pxe","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_mle5den","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
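When inspecting raw output like the above, the per-record codes can be parsed and tallied programmatically. A minimal sketch in Python, using only the standard library; the JSON string is copied from the response above, with the stray closing parenthesis corrected to a bracket so it parses:

```python
import json
from collections import Counter

# Raw LLM response (terminator fixed from ")" to "]" so json.loads accepts it)
raw = (
    '[{"id":"rdc_mmc6qzt","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},'
    ' {"id":"rdc_mv0xbtz","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},'
    ' {"id":"rdc_mlea0da","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},'
    ' {"id":"rdc_mlh2pxe","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},'
    ' {"id":"rdc_mle5den","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
)

codes = json.loads(raw)              # list of per-comment code dicts
emotions = Counter(c["emotion"] for c in codes)
```

Tallying the `emotion` field this way shows approval as the modal code (3 of 5 records), which a per-dimension "unclear" summary would otherwise hide.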