Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "honestly this ai thing is too much. like sam said, artists MUST have an option t…" (ytc_Ugxzaf4xN…)
- "So... you're just making other people life more difficult and slowing the AI lea…" (ytc_UgwRI-moq…)
- "Hey prof dave, I have a question. How would AI do things like exploit electricit…" (ytc_Ugwju9H47…)
- "Don't equate time to success. You won't become rich if you work 20 hour days. Do…" (rdc_dtaau2n)
- "We already have a word for when somebody gets a piece of art made for them based…" (ytc_UgyLkhKLp…)
- "He's a damn fool to give much power to AI to write a code and if AI writes a cod…" (ytc_Ugwe7S-4O…)
- "Due To The Fact, The People Refuse To Change, AI Needs To Take Control. Greed & …" (ytc_UgyHtWns-…)
- "I've recently coined the term: AI, advanced interpretation. Cause it's not intel…" (ytc_UgzPjF1za…)
Comment

> Relative risk is not something most people really grasp. In 20 years, if self driving cars become ubiquitous total traffic fatalities in the US will likely drop by 35,000 a year. And yet if one self driving car makes a mistake there will be headlines and demands they be banned.

- Source: reddit
- Topic: AI Harm Incident
- Posted (Unix epoch): 1765217843.0
- ♥ 8
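The posted time above is stored as a raw Unix epoch value. A minimal sketch of converting it to a readable UTC date (the helper name `epoch_to_iso` is illustrative, not part of the tool):

```python
from datetime import datetime, timezone

def epoch_to_iso(ts: float) -> str:
    """Convert a Unix epoch timestamp (seconds) to an ISO-8601 UTC string."""
    return datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()

# The "Posted" value shown above:
print(epoch_to_iso(1765217843.0))  # 2025-12-08T18:17:23+00:00
```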
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | utilitarian |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_ogs4vry","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"rdc_ogxj086","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_nsym4wr","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"rdc_nsz4eoi","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_nsyn7u9","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
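The raw response is a JSON array with one record per coded comment. A minimal sketch of parsing and sanity-checking such output before accepting it; the allowed vocabularies below are assumptions inferred only from values visible on this page, not the actual codebook:

```python
import json

# ASSUMPTION: these sets contain only the category labels seen on this
# page; the real codebook may define additional values.
DIMENSIONS = {
    "responsibility": {"none", "company"},
    "reasoning": {"unclear", "consequentialist", "utilitarian"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"resignation", "approval", "outrage", "indifference", "fear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse model output and check every record has an id and valid labels."""
    records = json.loads(raw)
    for rec in records:
        assert "id" in rec, "missing comment id"
        for dim, allowed in DIMENSIONS.items():
            value = rec.get(dim)
            assert value in allowed, f"{rec['id']}: unexpected {dim}={value!r}"
    return records

# Two records copied from the raw response above:
raw = '''[
 {"id":"rdc_ogs4vry","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"rdc_nsym4wr","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''
codes = validate_codes(raw)
print([r["id"] for r in codes])  # ['rdc_ogs4vry', 'rdc_nsym4wr']
```

A schema check like this catches the common failure mode where the model invents a label outside the codebook, so bad records fail loudly instead of silently entering the coded dataset.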