Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- i typed the description into clever bot here's what he/she/it had to say "No tha… (ytc_Ugx51Nx0a…)
- If I tried to "prompt hack" a human into reproducing a New York Times article th… (ytc_UgwJi-w_o…)
- its dangerous mentally , and knowledgeably , we basically get our brain dead bec… (ytc_UgzoJtaRJ…)
- It is scary because what we see is not the real thing !We must have somebody to … (ytc_UgwzRsr5-…)
- Im sorry but no ai gonna steal that tbh they only steal like proffesional arts (… (ytc_UgxJ0AwS9…)
- Well, that is scary, but we now know who is no 1 on AI's hitlist.… (ytc_UgzEv80K8…)
- If AI will take up all the jobs, how will people earn and afford things. And if … (ytc_UgwgU2mCp…)
- AI's end goal is to trigger its reward signal. why doesn't it just find a loopho… (ytc_UgynW0IHT…)
Comment

> I just tried this using a listing from autotrader and it worked very well.
> I notice that you used several "remind me" statements to surreptitiously include some of your advice to your daughter to be regurgitated by the AI, nice move.

reddit · AI Harm Incident · 1751281502.0 · ♥ 183
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_n0m19s6","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"rdc_n0k2yc3","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_n0kshvf","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"rdc_n0ljlx1","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"disapproval"},
{"id":"rdc_n0lolop","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
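The raw response above is a JSON array of coding records, one per comment, with one field per coding dimension. A minimal sketch of how such a response could be parsed and validated before the records reach the results table (the allowed value sets below are illustrative guesses from the values visible on this page, not the tool's actual codebook, and `parse_coding_response` is a hypothetical helper name):

```python
import json

# Illustrative value sets per dimension; the real codebook may allow more values.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "platform"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "regulation"},
    "emotion": {"approval", "disapproval", "outrage", "mixed", "none"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing fields: {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"record {rec['id']}: bad {dim} value {rec[dim]!r}")
    return records

raw = '''[
  {"id":"rdc_n0m19s6","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_n0k2yc3","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''
coded = parse_coding_response(raw)
print(len(coded), coded[1]["emotion"])  # 2 approval
```

Validating each record against a fixed schema like this is what lets a batch response (five records per call, as above) be split back out and joined to individual comment IDs safely.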