Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is a human problem, not an AI problem. This is mental illness. While mental illness can certainly be triggered in some situations, say, by severe trauma, fundamentally the person experiencing the illness is predisposed to it. If someone uses AI for a couple of months and starts believing reality is all a lie... something is wrong with their brain. If AI didn't trigger it, something else would have. AI isn't making people go insane. People who are going insane just happen to latch on to AI. If someone believes wild conspiracy theories, simply telling them the truth rarely works. If AI suddenly stopped playing along, they'd find another explanation.
Source: reddit · Topic: AI Moral Status · Timestamp: 1748379898.0 · ♥ 4
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_muo1akn", "responsibility": "none",        "reasoning": "virtue",           "policy": "none",          "emotion": "indifference"},
  {"id": "rdc_mul05x5", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",          "emotion": "outrage"},
  {"id": "rdc_mukux04", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "mixed"},
  {"id": "rdc_mukwb6o", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "rdc_mul3fpj", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"}
]
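The raw response is a JSON array with one coding object per comment id. A minimal sketch of turning such a response into per-comment coding results, assuming the four dimensions shown above; the allowed-value sets are inferred only from values visible in this batch, not from a full codebook:

```python
import json

# Allowed codes per dimension -- inferred from the values seen in the raw
# response above; the real codebook likely has more categories (assumption).
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "distributed"},
    "reasoning": {"virtue", "deontological", "consequentialist"},
    "policy": {"none", "unclear", "industry_self"},
    "emotion": {"indifference", "outrage", "mixed", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coding objects) into a
    dict keyed by comment id, skipping entries whose values fall outside
    the allowed sets (a simple guard against malformed model output)."""
    codings = {}
    for item in json.loads(raw):
        cid = item.get("id")
        codes = {dim: item.get(dim) for dim in ALLOWED}
        if cid and all(codes[d] in ALLOWED[d] for d in ALLOWED):
            codings[cid] = codes
    return codings

raw = ('[{"id":"rdc_mul3fpj","responsibility":"user","reasoning":"deontological",'
       '"policy":"none","emotion":"indifference"},'
       '{"id":"rdc_bad","responsibility":"robots","reasoning":"deontological",'
       '"policy":"none","emotion":"indifference"}]')
codings = parse_codings(raw)
# codings keeps rdc_mul3fpj and drops rdc_bad (unknown responsibility value)
```

The id in each object ties the coding back to the comment being displayed; here `rdc_mul3fpj` matches the coding-result table above (user / deontological / none / indifference).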