Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem is people are like wow, 80-95% accurate?? That's really good! Humans get stuff wrong all the time too, so it's probably better! The real issue though is humans generally make rational or predictable errors that you can work with or around or plan for. The 20-5% of the errors AI makes are just full blown hallucinations. They could be anything. You can't work around it.
Source: reddit (AI Responsibility) · timestamp 1755626863.0 · score 12
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n9hzee8", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "indifference"},
  {"id": "rdc_n9ig08d", "responsibility": "company",   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_n9ixia5", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "rdc_n9kka6l", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "rdc_n9jts9g", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"}
]
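The raw response is a JSON array of per-comment codings, keyed by comment id. A minimal sketch of how such output might be parsed and validated before use; the field names and the sample id come from the output above, but the allowed-label vocabularies are assumptions, not the actual codebook:

```python
import json

# Assumed label vocabularies, inferred from the sample output above --
# the real codebook may differ.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "user", "government"},
    "reasoning": {"deontological", "consequentialist"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and index the codings by comment id,
    dropping any record with a missing or out-of-vocabulary label."""
    records = json.loads(raw)
    valid = {}
    for rec in records:
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid[rec["id"]] = rec
    return valid

raw = ('[{"id":"rdc_n9ixia5","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["rdc_n9ixia5"]["emotion"])  # fear
```

Validating against a fixed vocabulary before indexing guards against the model inventing labels outside the coding scheme, which silently corrupts downstream tallies.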