Raw LLM Responses
Inspect the exact model output for any coded comment; records can be looked up by comment ID.
Random samples
- "Ai bros just want to have 'created' something. They don't want to do the actual …" — ytc_UgzWYScss…
- "“Slow down” doesn’t signal the driver behind like brake lights do. Tap the brak…" — ytc_UgwEh2u7X…
- "i went eren on chatgpt and tried to convince him to take over the world…" — ytc_Ugxjt5vPT…
- "Think they would have a hard time answering “are you a robot!” No individuality…" — ytc_UgxNxgt6K…
- ""Why don't you learn how to draw or pay an actual artist" Well the sad truth is,…" — ytc_UgwkkCwJW…
- "Last time i saw a very scary ai picture It was a woman doing yoga... One of her…" — ytc_UgzMOZ9fU…
- "As a wheelchair user and an artist I find it absolutely disgusting for disabilit…" — ytc_Ugyw2nEDB…
- "The AI is literally stealing from existing art in order to function. The idea th…" — ytc_UgwrVlubR…
Comment
> As someone who understands training data, surely you realize just how much of said data would have to be verbatim OpenAI canned responses in order to [repeatedly and reliably generate output like this](https://twitter.com/JaxWinterbourne/status/1733364034219454636) in a foundational LLM model, right?

Source: reddit · Tag: AI Harm Incident · Posted: 2023-12-10 UTC (Unix timestamp 1702169791) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_kcq75de","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_kcnu4gn","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"rdc_kcpjhfb","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_kcnbiab","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_kco4d98","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
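The comment-ID lookup described above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: the function name `lookup_coding` and the dimension list are assumptions, and only the JSON field names shown in the raw response above are taken from the source.

```python
import json

# Raw batch response from the coding model (shape copied from the example
# above; in practice this would be read from storage rather than inlined).
raw_response = """
[
  {"id": "rdc_kcq75de", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_kco4d98", "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]
"""

# Dimensions every record is expected to carry (assumed from the table above).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def lookup_coding(raw: str, comment_id: str) -> dict:
    """Parse a raw batch response and return the coding for one comment ID."""
    records = json.loads(raw)
    by_id = {rec["id"]: rec for rec in records}
    rec = by_id[comment_id]  # raises KeyError if the model skipped this comment
    # Guard against the model dropping a dimension from its JSON output.
    missing = [d for d in DIMENSIONS if d not in rec]
    if missing:
        raise ValueError(f"{comment_id} is missing dimensions: {missing}")
    return rec


coding = lookup_coding(raw_response, "rdc_kco4d98")
print(coding["emotion"])  # → outrage
```

The dictionary-by-ID pass keeps the lookup O(1) per query after a single parse, which matters when one raw response covers a whole batch of coded comments.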