Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `ytc_UgyOfaZGI…`: I used slang several times when chatgpt was not answering correct. After watchin…
- `ytr_Ugxhzhse5…`: Drivers have rights. And, we have a right to see the road. If nothing more than …
- `ytc_UgxKnCROn…`: These fools showing us early AI, china, Korea have it all secretly stored to kil…
- `ytc_UgzTYN7ln…`: MAN WAS KILLED IN PASCO COUNTY DOING UBER EATS WHAT HAPPENED TO YOUR PREDICTIVE…
- `ytc_UgyA93BcT…`: AI is just copying stuff based on existing data it has … ask to make a new style…
- `ytc_UgxPBfBw_…`: Never interacted with a chat bot yet that was any good to get problems solved.…
- `ytc_UgxwgQUbI…`: "(...) resulting in a slew of aspiring AI artists taking up the task of creating…
- `ytc_UgyoC3JkZ…`: Today’s AI systems learn mainly from human data — documents, images, and online …
Comment
> Here's a [non-paywalled article](https://thehill.com/policy/technology/5686657-senate-passes-deepfake-bill/) on the subject.
>
> Cool. Good to give victims a legal avenue to sue when they are violated.
>
> However, seems like it should also go hand-in-glove with legislation to hold companies making these tools accountable for not doing more to prevent it in the first place.
>
> Yes, I'm aware you can run some of these models locally outside of any company restrictions, which is where allowing the victims to sue individuals will be helpful. I don't believe that should absolve the companies from bearing any responsibility at all if their actual platform is being used in this manner, though.
>
> It's especially indefensible when they don't prevent it from being used on photos of kids. Looking at you, Elon / Grok.
Source: reddit · Topic: AI Harm Incident · Timestamp: 1768338610.0 (Unix) · ♥ 83
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_nzfpjh2", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "fear"},
  {"id": "rdc_nzf7etr", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "rdc_nzgxjfl", "responsibility": "distributed", "reasoning": "mixed", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_nzg537e", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_nzgdpha", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
```
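A batch response like the one above can be validated before it is merged back into the coding table, so that a malformed or out-of-codebook row never reaches the results. The following is a minimal Python sketch; the allowed category values in `CODEBOOK` are assumptions inferred from the samples shown on this page, and the real codebook may contain more categories.

```python
import json

# Allowed values per coding dimension (assumed from the examples above;
# the actual codebook may define additional categories).
CODEBOOK = {
    "responsibility": {"developer", "company", "user", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"industry_self", "liability", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference", "mixed"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # A row without an id cannot be attributed to a comment; drop it.
        if "id" not in row:
            continue
        # Every dimension must be present and take an allowed value.
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"rdc_nzf7etr","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"approval"}]')
print(parse_coded_batch(raw)[0]["policy"])  # liability
```

Rows that fail validation are dropped rather than repaired, which keeps the coded table trustworthy and leaves re-prompting the model for those comment IDs as a separate step.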