Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This was inevitable. GPT-4 pretty much already swallowed all of the useful data in the world, and throwing more compute at something without more data doesn't tend to improve it much. Not to mention, LLMs have been on the inefficient part of the logarithmic slope since GPT-2. We're at the point now where we're paying exponentially more for each incremental gain. This was quite foreseeable. And many did foresee it.
reddit AI Jobs 1754607854.0 ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n7i6902", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n7i75mz", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_n7i7j11", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_e7im7tm", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_e7j7mps", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
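As a sketch of how a raw response like the one above can be inspected programmatically: the snippet below parses the JSON array and looks up the coding record for a single comment id. The helper name `coding_for` is illustrative only, not part of any pipeline API; the sample data is an excerpt of the raw response shown above.

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment coding records.
raw = '''
[
  {"id": "rdc_n7i6902", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n7i75mz", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
'''

def coding_for(raw_response: str, comment_id: str):
    """Return the coding record matching `comment_id`, or None if absent."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None

print(coding_for(raw, "rdc_n7i75mz")["emotion"])  # resignation
```

A lookup like this is also a cheap validation step: if `coding_for` returns `None` for an id the pipeline expected, the model dropped that comment from its response.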