Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't trust these benchmark reports, they might have gamed a few or something. The problem with Gemini from day 1 is that it is super wrong most of the time. Basically hallucinating all the time. And that's very bad for productivity. ChatGPT or Claude or Grok, at least they kinda have a way to indicate that they don't know shit about what I'm asking.. but Gemini will give me absolute garbage and act like it's as true and verified as the top hit on a Google Search.
Source: reddit · AI Governance · 1764966415.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_nsfe15b", "responsibility": "none",    "reasoning": "unclear",          "policy": "none",     "emotion": "outrage"},
  {"id": "rdc_nsfh7m1", "responsibility": "company", "reasoning": "unclear",          "policy": "none",     "emotion": "resignation"},
  {"id": "rdc_nshfkuz", "responsibility": "company", "reasoning": "consequentialist", "policy": "none",     "emotion": "outrage"},
  {"id": "rdc_nsfn8jr", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_nsf7imf", "responsibility": "none",    "reasoning": "unclear",          "policy": "none",     "emotion": "approval"}
]
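The model codes comments in batches, so the record for any one comment has to be matched by its `id` inside the returned JSON array. A minimal sketch of how such a batch response might be parsed, assuming the response is valid JSON as shown above (the `code_for` helper name is hypothetical, not part of the pipeline):

```python
import json

# Two records excerpted from the raw batch response above.
raw = (
    '[{"id":"rdc_nsfe15b","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"outrage"},'
    '{"id":"rdc_nshfkuz","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"none","emotion":"outrage"}]'
)

def code_for(comment_id, response_text):
    """Return the coding record for one comment id, or None if absent."""
    records = json.loads(response_text)
    return next((r for r in records if r["id"] == comment_id), None)

record = code_for("rdc_nshfkuz", raw)
print(record["reasoning"])  # -> consequentialist
```

A real pipeline would also need to handle malformed model output (e.g. wrap `json.loads` in a `try`/`except`) and flag ids the model skipped.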