Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- “wouldn’t AI interface with the trolley’s computer system and put on the brakes b…” (ytc_Ugx5u8FWW…)
- “why do i feel like this is a course on how to do these actual crimes using AI, s…” (ytc_UgwsW52VO…)
- “As a consumer I only have one thing in mind: how can I enjoy the best art possib…” (ytc_Ugxpvc18o…)
- “Another one with a robot dog it’s called robots have revealed the secret and it …” (ytc_Ugz3psp5l…)
- “First off, thank you for such an in-depth comment!! I love that! It's super inte…” (ytr_UgxzMpC8q…)
- “@pauline-e4l? No response? If you're going to start shit, be ready for literary…” (ytr_UgyG2tZig…)
- “LLM models exposed by companies have guardrails which prevent it, if you prompt …” (ytc_UgyMktO7l…)
- “Kind of scary we all know what this means ... right? May God decide what comes n…” (ytc_UgynUBuJh…)
Comment
Is it possible that the nature of neural networks means accuracy in one kind of face trades off accuracy for other kinds of faces? I'm hardly an expert but it's my understanding that trying to get it to do too many things makes it mush
- Source: reddit
- Topic: AI Harm Incident
- Posted: 1576179726.0 (Unix timestamp)
- Score: ♥ 13
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_dzyft7n","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_dzxz0e9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_fal316o","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"rdc_fal7kg7","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_fala5ne","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"resignation"}
]
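A minimal sketch of how a batch response like the one above could be parsed and indexed by comment ID for the lookup view. The allowed values per dimension are inferred only from the samples shown here (they are not a published codebook), and `index_codings` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from the samples above (assumption,
# not an official codebook for this tool).
ALLOWED = {
    "responsibility": {"none", "developer", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference", "resignation"},
}

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and return {comment_id: coding},
    rejecting any value outside the inferred codebook."""
    out = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
        out[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return out

# Two records copied from the raw response above.
raw = '''[
 {"id":"rdc_dzyft7n","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"rdc_fal7kg7","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]'''
codings = index_codings(raw)
print(codings["rdc_fal7kg7"]["emotion"])  # indifference
```

Validating against a fixed value set at parse time is what makes a coding like the `unclear`/`indifference` row in the table above trustworthy: any off-codebook label from the model fails loudly instead of being stored.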