# Raw LLM Responses

Inspect the exact model output for any coded comment.
Comment

> It wasn't an argument against self-driving cars. It was simply to bring up a situation that might happen. We should think about these things ahead of time so they don't derail automation when they arise. I'm 100% in favor of getting these things on the road when they are ready.

Source: reddit · Topic: AI Harm Incident · Posted: 2017-09-07 (Unix timestamp 1504818258) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_dmp2zd0", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_dmp6sw7", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_dmp9o7f", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_looxc0k", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_loq5txa", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
```
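The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of looking up a single comment's codes by ID, using only the field names visible in the response above (the `raw_response` literal here is a trimmed copy for illustration, not the tool's actual storage format):

```python
import json

# Raw LLM response: a JSON array of per-comment coding objects,
# each carrying the four coded dimensions plus the comment ID.
raw_response = """
[
  {"id": "rdc_dmp9o7f", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_dmp6sw7", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
"""

# Index the rows by comment ID so any coded comment can be
# inspected directly, mirroring the "look up by ID" workflow.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

print(codes_by_id["rdc_dmp9o7f"]["emotion"])  # → approval
print(codes_by_id["rdc_dmp6sw7"]["emotion"])  # → outrage
```

Building the dictionary once makes each subsequent ID lookup constant-time, which matters when a batch response covers many comments.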