Raw LLM Responses

Inspect the exact model output returned for each coded comment.

Comment
The thing is that since ChatGPT came out, that bullshit rate (or hallucinations) is what holds it back in ALL the fields it's actually marketed for everywhere. Most of the things we want from AI tools (replacing humans) have an expected low failure rate most of the time. And if a human makes a mistake, most will learn from it. The AI won't. It's obvious that if they could fix that, the actual usefulness would skyrocket, but I don't see how the current approach could solve it. If they knew how to fix it, they would've done so. Instead they are pushing out new features that all have the same issues. I don't even see it as a good Google replacement anymore after I had so many bullshit responses with bullshit sources that never claimed what the AI claimed. And I also had friends talk about how the AI explained something to them, and when I double-checked, it was bullshit again.
reddit · AI Responsibility · 1755627488.0 (2025-08-19 UTC) · ♥ 3
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-25T08:33:43.502452
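
Downstream, each coded comment reduces to a small record over these four dimensions plus a timestamp. A minimal sketch of that shape in Python, assuming only what the table above shows (the class and field names are illustrative, not the tool's actual API):

from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment across the four coding dimensions."""
    responsibility: str  # e.g. "ai_itself", "company", "none"
    reasoning: str       # e.g. "consequentialist"
    policy: str          # e.g. "none"
    emotion: str         # e.g. "resignation", "outrage", "approval"
    coded_at: str        # ISO 8601 timestamp of when coding ran
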
Raw LLM Response
[ {"id":"rdc_n9kmgf5","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"rdc_n9h2n1z","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"rdc_n9h5qji","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"rdc_n9hhbnt","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"rdc_n9hooun","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]