Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm a translator and this is exactly why I refuse outright to use AI. I translated the UI of a medical device before and I can assure you that I triple, hell even quadruple checked, my work to make sure that the UI and the functions of the device matched. Because it's a device that could administer medications, I could absolutely see people getting harmed or dying if there was an inconsistency at that level and I didn't want to be responsible for that. I genuinely fear and believe that the AI push will result in exactly what I adamantly tried to prevent because companies will bulk translate things using MT/AI, will not have the translation checked by an experienced professional like me, and then immediately drop into production whatever the AI puked.
reddit · AI Responsibility · 1775938817.0 · ♥ 5
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_ofhkd0s","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_ofjk22c","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_ofim6p5","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"rdc_ofmqd2o","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"rdc_ofhpg1d","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
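As a sketch of how the raw response above maps onto the coding result table: the response is a JSON array of per-comment codings keyed by an `id` field, and the table shows the entry for `rdc_ofmqd2o`. The snippet below only assumes the response is valid JSON with the fields shown; the variable names are illustrative, not part of any tool.

```python
import json

# Exact raw LLM response from above: a JSON array of codings, one per comment id.
raw = (
    '[{"id":"rdc_ofhkd0s","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},'
    '{"id":"rdc_ofjk22c","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},'
    '{"id":"rdc_ofim6p5","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},'
    '{"id":"rdc_ofmqd2o","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},'
    '{"id":"rdc_ofhpg1d","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]'
)

# Index the codings by comment id for lookup.
codings = {rec["id"]: rec for rec in json.loads(raw)}

# The translator's comment shown above was coded under id rdc_ofmqd2o.
coding = codings["rdc_ofmqd2o"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# → ai_itself consequentialist ban fear
```

Indexing by `id` before lookup keeps the check O(1) per comment, which matters when inspecting many codings at once.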