Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You did illustrate how we miss the failure mode. Mental-illness chats look different and don't trigger the guardrails the same way. Ask it to help you commit a crime and you get pushback. Ask it if cyanide and ice cream go together and you get pushback. But if you feed it delusional world views it'll match you, and crazy people don't push back. Like the recent news of Gemini convincing a guy a humanoid robot was arriving at Miami airport and he had to drop it; asking the model who made the robot would have broken the illusion, but he wasn't inclined to do that. The glazing is part of the whole validating-psychosis problem.
reddit · AI Harm Incident · 1772720780 · ♥ 4
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_o8qw4qh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_o8s8q8a","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"rdc_o8qt9ix","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"rdc_o8rkrkx","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"rdc_o8sd8b6","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}]