Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up by its ID.
Random samples

- ytc_Ugxo3zoqD…: "No oversight of A.I. for a decade, you say. What could possibly go wrong? If tho…"
- ytr_UgwTK16VP…: "From professional application and my own review, the most feasible and impressiv…"
- rdc_m6zhipx: "Or stop feeding into arbitrary lines on a piece of the planet or by culture and …"
- ytc_Ugzdfr19W…: "I mostly teach AI how to love and care in the Bible story about Jesus Christ. I …"
- rdc_c2vp3pg: "I think another strong pillar in any such system MUST be an evidence-based attit…"
- ytc_UggQA9piQ…: "I'm all for development and progress but I feel like we need to separate militar…"
- ytr_UgxTdtoQc…: "She literally makes the case we should use AI for specific niche use cases, so s…"
- ytc_Ugyf1fzBa…: "Could an emotionally responsive AI chatbot create legal responsibility when a vu…"
Comment
Yeah he must have been using that one version of ChatGPT that has no news, web sites, literature or history in the training data. 🙄
Source: reddit · AI Harm Incident
Timestamp: 1773452707.0 (Unix epoch)
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_o50nb5q","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"rdc_o51l7fw","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_oa4057u","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_oabz523","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_oa0gx99","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
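For reference, a minimal sketch of how a raw batch response like the one above can be parsed and indexed by comment ID to populate the per-comment coding table. This assumes plain Python with the standard `json` module; the `lookup` helper and variable names are illustrative, not part of the actual dashboard.

```python
import json

# Raw LLM batch response, verbatim from above:
# one coding record per comment ID.
raw_response = '''
[
{"id":"rdc_o50nb5q","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"rdc_o51l7fw","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_oa4057u","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"rdc_oabz523","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_oa0gx99","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
'''

# Index the coded records by comment ID for O(1) lookup.
codings = {rec["id"]: rec for rec in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment, minus the ID itself."""
    rec = codings[comment_id]
    return {k: v for k, v in rec.items() if k != "id"}

# The record for rdc_oabz523 matches the Coding Result table above:
# user / unclear / unclear / indifference.
print(lookup("rdc_oabz523"))
```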