Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- "So, they're actually poisoning the poor slowly by using data centers as weapons.…" (`ytc_UgzDIJUkb…`)
- "Two risks of AI: 1) We could create something that is smarter than us, as discu…" (`ytc_UgxpK7c2O…`)
- "It is based on the prompts. it hasnt been completely limited in what it can foc…" (`ytc_Ugw9la6_4…`)
- "This is the inevitable future, sooner or later, tomorrow, or in two hundred year…" (`ytc_UgzQfmMzn…`)
- "One of the best books I’ve read lately is Eidos by Felden Vareth. It’s hard scie…" (`ytc_UgwQ7Fv3Y…`)
- "just to put in my own take as an autistic writer and artist: honestly, i've tri…" (`ytc_UgxPFIXQ4…`)
- "The mistake they made was to assume AI was meant to replace employees when in re…" (`ytc_UgzfAttB8…`)
- "Too often this topic ends up in a false dichotomy, where one group of people is …" (`ytr_UgwNJGR_C…`)
Comment

> I don't think they killed themselves because of the chatbots. They did have reasons, however non-crucial they may seem to someone else. What you wrote would be similar to stating "the rope hanged him".

reddit · AI Harm Incident · 1756230752.0 (2025-08-26 UTC) · ♥ 29
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_nartci7","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_nasxvej","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_nashswm","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"rdc_nat5yz3","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_naskke8","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
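The raw response is a flat JSON array of per-comment codes. A minimal sketch of how such a response might be parsed into a lookup table keyed by comment ID (the four dimension names come from the dump above; treating them as a required schema is an assumption, not a confirmed contract):

```python
import json

# Dimension names observed in the coding table and raw response above;
# requiring all four per row is an assumed validation rule.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict[str, dict[str, str]]:
    """Map each comment ID to its coded dimensions, rejecting malformed rows."""
    codes = {}
    for row in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in row]
        if "id" not in row or missing:
            raise ValueError(f"malformed row: {row!r}, missing {missing}")
        codes[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return codes

raw = '''[
  {"id":"rdc_nartci7","responsibility":"none","reasoning":"consequentialist",
   "policy":"none","emotion":"indifference"},
  {"id":"rdc_nasxvej","responsibility":"none","reasoning":"consequentialist",
   "policy":"none","emotion":"resignation"}
]'''
codes = parse_codes(raw)
# codes["rdc_nasxvej"]["emotion"] == "resignation"
```

Keying by ID mirrors the "look up by comment ID" workflow: once parsed, any coded comment's dimensions can be fetched directly without rescanning the raw model output.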