Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The only way to regulate AI effectively is to create large governmental departme…
ytc_Ugz0MvEK7…
@kuntblunt Gen Z starts in 97. Gen Alpha started in the 2010's.
Millenials wer…
ytr_UgzE9hVlX…
In my opinion, AI will just be a tool for humans. Just like our phones,…
ytc_Ugw699eRP…
Serious question. You and Cleo Abram speak sorta the same. Is that a type of int…
ytc_UgwerThkL…
I once used AI to translate a C++ Counter-based PRNG into Python. It worked. The…
ytc_UgxO3N6_O…
Ways to teach kids life skills.... We took it out... We used to teach them job s…
ytc_UgynWXbfK…
"AI is going to be subserviant to human, it is going to be smarter than us"
Ok.…
ytc_Ugy5mRydo…
This tech is used in Gaza to survey the largest open air prison. Israel also tra…
ytc_UgzjGd5QZ…
Comment
Exactly. ChatGPT will, if he ever hears the word „suicide“, provide you with emergency sites and numbers. It’s nearly impossible that he said that. Maybe the guy asked him to rate his plan in a fictional scenario or something.
Source: reddit
Category: AI Harm Incident
Timestamp: 1756221853.0
♥ 125
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:13:13.233606 |
Raw LLM Response
```json
[
  {"id":"rdc_n8n5jww","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_nas2d5i","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_nb83qbc","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_nbs2b4q","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"rdc_nc22lrk","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"approval"}
]
```
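As a minimal sketch of the "look up by comment ID" step, the raw LLM response can be parsed as a JSON array and indexed by `id`. The variable names below (`raw_response`, `codings`) are illustrative, not part of the tool; the data is an excerpt of the response above.

```python
import json

# Excerpt of a raw LLM coding response: one JSON object per coded comment.
raw_response = """[
  {"id":"rdc_n8n5jww","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_nas2d5i","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]"""

# Index the codings by comment ID so any comment can be inspected directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

print(codings["rdc_nas2d5i"]["emotion"])  # approval
```

Keying the parsed rows by `id` makes the per-comment lookup an O(1) dictionary access, which matches how the dashboard resolves a comment ID to its coded dimensions.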