Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
You did illustrate how we miss the failure mode. Mental-illness chats look different and don't trigger the guardrails the same way. Ask it to help you commit a crime and you get pushback. Ask it whether cyanide and ice cream go together and you get pushback. But if you feed it delusional world views it'll match you, and crazy people don't push back. Like the recent news of Gemini convincing a guy that a humanoid robot was arriving at Miami airport and he had to drop it; asking the model who made the robot would have broken the illusion, but he wasn't inclined to do that.
The glazing is part of the whole validating psychosis problem.
reddit
AI Harm Incident
Posted 2026-03-05 (Unix timestamp 1772720780)
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[{"id":"rdc_o8qw4qh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_o8s8q8a","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"rdc_o8qt9ix","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"rdc_o8rkrkx","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"rdc_o8sd8b6","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}]
```
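The raw response is a JSON array of per-comment codings. A minimal sketch of how such a batch might be parsed and validated before being stored (the allowed values below are only the ones visible in this response; the real codebook presumably defines the full category sets):

```python
import json

# Allowed values per dimension -- an assumption inferred from this single
# response, not the project's actual codebook.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "user"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "fear", "mixed", "indifference", "resignation"},
}

RAW = """[
 {"id":"rdc_o8qw4qh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_o8s8q8a","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

def parse_batch(raw: str) -> dict:
    """Index a batch response by comment ID, rejecting any coding
    whose value falls outside the known schema."""
    codings = {}
    for row in json.loads(raw):
        cid = row.pop("id")
        for dim, value in row.items():
            if value not in SCHEMA.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        codings[cid] = row
    return codings

coded = parse_batch(RAW)
print(coded["rdc_o8s8q8a"]["emotion"])  # → fear
```

Indexing by the `rdc_`/`ytc_` comment ID is what makes the "look up by comment ID" view above cheap: one dictionary access per inspection, with malformed model output rejected at ingest time rather than at display time.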