Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect:

- ytr_UgzVVIkDS…: "People have already been doing this, artstation is/was 'bullying' the site admin…"
- ytc_UgxJYEki3…: "Abstract: The Hintonian Ghost Gang. This paper formalizes the \"Hinton-Blinky Para…"
- rdc_l26xwtg: "Man we should ban open source AI. It's going to make the world an objectively da…"
- ytc_UgzWF10OS…: "5:19 damn must be nice 😂 though i will say i much prefer fedex over amazon. We'r…"
- ytc_UgzwBRxiD…: "If AI doesn't make mistakes it must be your poor decisions that are pioneering w…"
- ytc_UgwT0zn6B…: "How to know if your students are using chatgpt to cheat on papers and projects? …"
- ytc_UgyI2bL74…: "Lol but he is thr CEO. Is this a warning or a promise? Hahahhaha. Most likely jo…"
- ytc_UgyA8SeUq…: "I mean, at least they can't harass people in the streets for engagement. I would…"
Comment
Yes. I asked Chat to review website terms and look for any differences between the terms on the site and the document I uploaded to it. When it identified all sorts of non-issues between the documents, I got concerned.
So, I asked it to review the provision in each document on “AI hallucinations” (which did not exist in either document). Chat simply “made up” a provision in the website terms, reproduced it for me, and recommended I edit the document to add it. It was absolutely sure that this appeared on the web version. It had me so convinced that I scrolled the Terms page twice just to make sure I wasn’t the crazy one.
Source: reddit · Topic: AI Harm Incident · Posted: 2025-05-12 (Unix timestamp 1747012683.0) · ♥ 11
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_mrtgd8d", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",          "emotion": "approval"},
  {"id": "rdc_mrubyeu", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability",     "emotion": "fear"},
  {"id": "rdc_mrtafeh", "responsibility": "developer", "reasoning": "deontological",    "policy": "industry_self", "emotion": "mixed"},
  {"id": "rdc_mrulpjd", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",          "emotion": "mixed"},
  {"id": "rdc_mrtcjne", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability",     "emotion": "resignation"}
]
```
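The model returns one JSON object per comment, keyed by comment ID, so looking up a coding is a matter of parsing the array and indexing by `id`. A minimal sketch, using the field names shown in the raw response above (the two-entry `raw` string here is an excerpt, not the full batch; epoch-to-date conversion matches the Unix timestamp on the comment card):

```python
import json
from datetime import datetime, timezone

# Excerpt of the raw batch response shown above (two of five entries).
raw = """[
  {"id": "rdc_mrubyeu", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"},
  {"id": "rdc_mrtafeh", "responsibility": "developer", "reasoning": "deontological",
   "policy": "industry_self", "emotion": "mixed"}
]"""

# Index codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["rdc_mrubyeu"]
print(coding["responsibility"], coding["emotion"])  # ai_itself fear

# The comment's posted time is stored as a Unix epoch float.
posted = datetime.fromtimestamp(1747012683.0, tz=timezone.utc)
print(posted.date())  # 2025-05-12
```

Indexing once up front keeps per-ID lookups cheap even when a batch response covers many comments.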