Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples — click to inspect
- "Well, in the future, we should make sure that trucks hold their objects better a…" (ytc_UghemGmrq…)
- "We Don't Want AI . .We Don't Want AI . .We Don't Want AI . .We Don't Want AI . .…" (ytc_UgzG8IYUR…)
- "You just can feel the lack of care for what was done when it is made by AI, even…" (ytc_UgzNDx8CA…)
- "a god damn china virus who support CCP use AI to slave human in china.…" (ytc_Ugz1exrWc…)
- "@roycampbell586not if you put your own local model large language model AI on y…" (ytr_Ugxsu1BlJ…)
- "What if you're actually just talking to an ai but it's been thrown through multi…" (ytc_UgxA-SmEb…)
- "If you had to choose, I would choose polar bears. There are only 25,000 polar be…" (ytc_UgwURQQyE…)
- "The government programs AI . AI programs you and replaces God in your life .…" (ytc_Ugw-wVrCf…)
Comment
> Guess that's what happens [when you steal your self-driving tech than actually try to develop it yourself](https://www.bloomberg.com/news/features/2017-03-16/fury-road-did-uber-steal-the-driverless-future-from-google).

Source: reddit · Topic: AI Harm Incident · Posted: 1491342831.0 (Unix timestamp) · Score: ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_dfu26yo","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_dfu8qoc","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"rdc_dftvnyo","responsibility":"company","reasoning":"unclear","policy":"liability","emotion":"unclear"},
  {"id":"rdc_dfu347c","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_dftia6a","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
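The lookup described at the top of this view can be reproduced offline. Below is a minimal sketch (the helper name `index_by_id` is illustrative, not part of the tool) that parses a raw LLM response of this shape and keys each coding record by its comment ID, using only the JSON shown above:

```python
import json

# The raw model output is a JSON array of per-comment coding records.
# This string is copied verbatim from the response shown above.
raw_response = """
[
 {"id":"rdc_dfu26yo","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_dfu8qoc","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
 {"id":"rdc_dftvnyo","responsibility":"company","reasoning":"unclear","policy":"liability","emotion":"unclear"},
 {"id":"rdc_dfu347c","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"rdc_dftia6a","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model output and key each coding record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
# Look up the record matching the Coding Result table above.
record = codes["rdc_dfu347c"]
print(record["reasoning"], record["emotion"])  # → deontological outrage
```

The indexed dict makes the "look up by comment ID" operation a constant-time key access instead of a scan over the array.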