Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Honestly… the driving of manned trucks coupled with the lack of a driver union ,…
ytc_UgxCqK8TE…
I went to college long before even google was popular and the problem isn't lazi…
ytc_UgzY1SpIB…
Great program. My personal fear is that given the way industry thinks it would n…
ytc_UgzlFeu1-…
Its not even new ideas... ive seen ppl in some real ai tool forums (not just www…
ytr_UgzdefEmP…
Laid off tech worker here. I've been waiting for the timeline where they realize…
ytc_UgwD4drRT…
AI therapy is not a bad idea, it's just that you have to be careful which AI you…
ytc_Ugw4Te9m_…
yep this is true, for someone that is actually trained/learn how to do stuff wit…
ytc_Ugxe-t0r7…
Nah if u zoom into the background or spend more than a couple seconds looking at…
ytr_UgwmcyjS8…
Comment
Yes. I actually read a usage report OpenAI published last year after I posted this cuz I was curious if my intuition that it’s increasingly being used existentially was evidenced. Granted, it’s their paper and the period being assessed was mostly 2024. But it did suggest that’s starting to happen vs scholastic or professional use. Maybe not the most typical prompts yet, and there’s some gray area when it comes to what would be classed as request for personal feedback vs a tutorial or just gathering information. But the glazing is becoming notorious I can only assume because they noticed this and realized from a marketing perspective that if they could encourage therapeutic use and play up the personification people would grow addicted.
reddit
AI Harm Incident
Posted: 2026-03-05 15:59:07 UTC (Unix timestamp 1772726347)
Score: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_o8sndk2","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"rdc_o8sqyi6","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_o8sr9fz","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"rdc_o8tbz00","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"rdc_o8wyzmp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
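The raw response above is a JSON array of coded records, one per comment ID, with one value for each dimension in the Coding Result table. A minimal sketch of how such a response could be parsed and validated, assuming the category sets inferred from the samples shown here (the full codebook likely contains more values):

```python
import json

# Allowed values per dimension, inferred from the samples above.
# ASSUMPTION: the real codebook may define additional categories.
CODEBOOK = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability", "regulate", "industry_self"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only records whose
    values fall inside the codebook for every dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items())
    ]

# Example: one valid record is kept, a malformed one is dropped.
raw = ('[{"id":"rdc_o8sndk2","responsibility":"company","reasoning":"mixed",'
       '"policy":"liability","emotion":"fear"},'
       '{"id":"rdc_bad","responsibility":"martians"}]')
print(parse_coding_response(raw))
```

Filtering rather than raising keeps a single malformed record from discarding the whole batch; rejected IDs could instead be logged and re-queued for recoding.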