Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the concept is pretty simple, the same as for "self-driving". If you are rarely required to intervene, you will rarely intervene when required. So the technology at a baseline has to be better than a qualified human paying attention, or you will end up with more mistakes as the person responsible for it will inevitably get lazy and stop checking everything
reddit · Viral AI Reaction · 1776945927.0 · ♥ 63
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:13:13.233606
Raw LLM Response
[
  {"id": "rdc_ohsp60w", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "rdc_oht0c5i", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_ohte27h", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_ohtelst", "responsibility": "none",      "reasoning": "virtue",           "policy": "none",     "emotion": "outrage"},
  {"id": "rdc_ohtgvwi", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "fear"}
]
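The raw response is a single JSON array with one object per comment id, so mapping a coded comment back to its dimensions is a plain parse-and-index step. A minimal sketch (the `raw` string below just reproduces the response shown above; the lookup id `rdc_oht0c5i` is the one whose coding result is displayed in the table):

```python
import json

# Raw batch response exactly as returned by the model: a JSON array,
# one object per comment id, one key per coding dimension.
raw = """[
  {"id":"rdc_ohsp60w","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_oht0c5i","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_ohte27h","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_ohtelst","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"rdc_ohtgvwi","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]"""

# Index the batch by comment id so any coded comment can be inspected directly.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for the comment shown above.
coding = codings["rdc_oht0c5i"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# → developer consequentialist regulate fear
```

In practice the same indexing step also makes it easy to spot low-signal rows, e.g. filtering for entries where every dimension came back "unclear" or "none".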