Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
> In one of the more harrowing episodes, a man waved a .22-caliber revolver a…
rdc_ed0p4o0
Maybe we're doing a disservice to everyone by insisting on calling it AI. AI sug…
ytc_UgywK2Ux2…
The thing is, the AI is trained on past data about choices doctors made. So the …
ytr_Ugzl-i878…
You have summarized the arguments put forward by AI-bro’s to defend AI very well…
ytc_Ugxgibfxf…
Note: neuro is a project and I cant really explain it well but I can tell u that…
ytr_Ugwsljen-…
I’ve worked with GenAI for years and I still see the same shortcomings. Specific…
ytr_UgySe1ehn…
I appreciated this report.
If I had to have a job interview with a …
ytc_UgxsZnyKR…
Welp here comes the nightmares lmao... I genuienly am scared of whats to come, n…
ytc_Ugw929onJ…
Comment
Considering he doctored the prompt, I think this is like blaming the rope, however I don't like this trend where the first advocacy for this type of stuff is just to go hard on the parents for being 'dumb' and blaming GPT.
I think companies SHOULD be required to defend things their generative models do. We're working towards a future where ai may be and is currently deciding whether someone has health coverage or not.
We're consistently seeing stories where the ai is being talked about like this independent entity, absolving the company of any liability or responsibility on their part. I think that's a bad direction to go in.
reddit
AI Harm Incident
2025-08-27 (Unix timestamp 1756298282)
♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_nasjx3o","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"rdc_nau03wj","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_nau30to","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"rdc_nau8a2h","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"rdc_naxrqoq","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"}]
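The raw response above is a JSON array of per-comment coding records, each carrying the four dimensions shown in the Coding Result table. A minimal sketch of how such a batch could be parsed and indexed for the "Look up by comment ID" feature follows; the function name `index_codes` and the `"unclear"` fallback for missing dimensions are assumptions, not the tool's actual implementation (the example records are abbreviated from the response above, with the malformed trailing `)` corrected to `]`).

```python
import json

# Abbreviated batch response from the coding model (two of the five records).
raw = '''[
  {"id": "rdc_nasjx3o", "responsibility": "none", "reasoning": "virtue",
   "policy": "none", "emotion": "approval"},
  {"id": "rdc_naxrqoq", "responsibility": "company", "reasoning": "contractualist",
   "policy": "regulate", "emotion": "fear"}
]'''

# The four coding dimensions displayed in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_json: str) -> dict:
    """Parse a batch coding response and index the records by comment ID.

    A record missing a dimension falls back to "unclear", mirroring the
    fallback value shown in the Coding Result table (an assumption here).
    """
    records = json.loads(raw_json)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = index_codes(raw)
print(codes["rdc_naxrqoq"]["policy"])  # -> regulate
```

Indexing by ID up front keeps each lookup O(1), which matters if the dashboard resolves many comment IDs per page view.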