Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> CEO is going to look at a team they don't have to hire and not

Completely agree. As they should.

> It wrote the entire incident in 14 minutes where a team would take 3 days. Even if this was a shit report

And there's the rub. If the AI isn't *accurate,* it doesn't matter how little time it takes. A business's entire reason for existing is profit. If the AI can't deliver an accurate product, it's worse than useless; it's a waste of time.

It's clear the AI industry's solution to inaccurate results is more training. They didn't anticipate reaching a wall. But the wall is there. AI is fine for computerized tasks where validation is easy. However, for tasks that require *comprehension* and *understanding*, AI can't do that reliably. And it's looking increasingly like it NEVER will.
reddit · Viral AI Reaction · 1776969926.0 · ♥ 21
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_ohuch7b", "responsibility": "none",    "reasoning": "unclear",          "policy": "none",    "emotion": "indifference"},
  {"id": "rdc_ohubvtb", "responsibility": "company", "reasoning": "consequentialist", "policy": "none",    "emotion": "fear"},
  {"id": "rdc_ohw0ktk", "responsibility": "none",    "reasoning": "unclear",          "policy": "none",    "emotion": "mixed"},
  {"id": "rdc_ohvd9at", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_ohv00c1", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
```
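The raw response is a JSON array covering a batch of comments, so the coding result shown above has to be recovered by matching the comment's `id`. A minimal sketch of that lookup, assuming the batch structure seen in the raw output (the variable and function names here are illustrative, not from the pipeline):

```python
import json

# Raw model output, copied verbatim from the export above.
raw = (
    '[{"id":"rdc_ohuch7b","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_ohubvtb","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"},'
    '{"id":"rdc_ohw0ktk","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"mixed"},'
    '{"id":"rdc_ohvd9at","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_ohv00c1","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"outrage"}]'
)

# Index the batch by comment id so a single comment's codes can be pulled out.
codes = {row["id"]: row for row in json.loads(raw)}

# The dimension table above (responsibility=company, reasoning=consequentialist,
# policy=unclear, emotion=indifference) corresponds to the entry rdc_ohvd9at.
print(codes["rdc_ohvd9at"]["policy"])  # unclear
```

This reproduces the mapping from the raw batch to the per-comment table shown above; the key point is that the table is one row of the array, not the whole response.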