Raw LLM Responses
Inspect the exact model output for any coded comment.
Comments can be looked up by ID. Random samples (truncated previews):

- `ytc_UgyeT6_WF…`: "Such a ludicrous statement. Why bother with something that will destroy us? What…"
- `ytc_Ugx7yPEmT…`: "It's so strange to think about all the 'AI-will-end-the-world/ AI-will-end-all-o…"
- `ytc_Ugwx43Gdo…`: "I saw a WAYMO car make a u turn in a very busy intersection almost hitting a car…"
- `ytr_UgxgNmbd2…`: "It doesn't have to be. Look at people on the internet, when it comes to creative…"
- `ytc_UgwORgEYZ…`: "I worked with ai back in 1999. It's a mistake to think it will only replace huma…"
- `ytc_Ugx0o9iGh…`: "Fascinating that essentially A.i uses the same principles of xeno linguistics th…"
- `ytc_Ugw0KRDnD…`: "'I wish I'd spent more time with my wife' / That's the basis of what should be dif…"
- `ytr_Ugy6aWEsb…`: "@Cyborg_Lenin what is an 'AI artist'? If you mean just giving AI a prompt and cl…"
Comment
This is the problem with how people approach social media algorithms. Their mindset is, "This is not a good thing, therefore the algorithm is doing a bad thing". Search and recommendation algorithms are designed to cater to what appeals to the individual user. They're not supposed to sift through societal morals and be a techno parent. When an algorithm stops catering to the individual interests of the user it's servicing that's when it loses the functionality it's intended to have. If a user doesn't see things they're interested in on the site or app they're using, they leave.
- Source: reddit
- Topic: AI Harm Incident
- Posted: 2021-08-10 (Unix timestamp 1628608748)
- Score: ♥ 89
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_h8evtvw","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_h8f2dae","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"rdc_h8hwuxh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_h8f1lhd","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"rdc_h8f3jzj","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"resignation"}
]
```
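Because the raw LLM response is a JSON array of records keyed by comment ID, the coded dimensions for any single comment can be recovered by indexing that array. The sketch below (the `index_by_id` helper is an assumption for illustration, not part of the original tool; the field names follow the response above) shows one way to do it:

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# This subset reuses two records from the response shown above.
RAW_RESPONSE = """
[
  {"id": "rdc_h8evtvw", "responsibility": "user", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_h8f3jzj", "responsibility": "user", "reasoning": "deontological",
   "policy": "industry_self", "emotion": "resignation"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw response and index its coding records by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codes = index_by_id(RAW_RESPONSE)
print(codes["rdc_h8f3jzj"]["emotion"])  # resignation
```

Indexing once by ID makes the per-comment lookup O(1), which matches how the "Coding Result" table for a single comment is pulled out of a batched response.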