Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "You know what I find funny about this drama. It’s humanity’s own fault that it e…" (ytc_UgyxLReGT…)
- "Me thinks this a reality ew are about to face. Elon said it best. AI is dangerou…" (ytc_UgwF6i9Db…)
- "If AI suppositively can't unlearn plagerism, maybe we need start poisoning the d…" (ytc_Ugxzr6c0L…)
- "Respectfully, the video looses the point. AI can only do one task at a time. The…" (ytc_UgyxTq2e2…)
- "The problem is that this is not true, at least not now. I work at a firm with ve…" (ytc_Ugx8AEOQc…)
- "I am a IT guy my job now is to apply automation and ai to replavpce other people…" (ytc_UgyiW9ARP…)
- "But go beyond that first stage where we lose our jobs - what happens to the capi…" (ytc_Ugw7WdCHr…)
- "AI is pretty much trash! good for collating some data, but pretty sucks at most …" (ytc_UgzKOKvN5…)
Comment

> I'm not morally arguing about whether AI should police behaviour.
> I'm just saying they currently are a long way from even taking the first step in being able to.

Source: reddit · AI Harm Incident · Posted: 1503166941 (Unix timestamp) · ♥ 33
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_dlullrk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_dluejq2","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_dlvgbup","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"rdc_dlun7f7","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_dlucz7i","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
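A batch response like the one above can be turned into a per-comment lookup with a few lines. This is a minimal sketch, assuming each response is a JSON array of records carrying exactly the five fields shown (`id` plus the four coded dimensions); `parse_codings` and `RAW` are illustrative names, not part of the tool.

```python
import json

# Sample batch response, abridged from the output shown above.
RAW = """[
 {"id":"rdc_dlullrk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"rdc_dlun7f7","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""

# Fields every record is assumed to carry.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: coding},
    rejecting records that are missing any expected dimension."""
    codings = {}
    for rec in json.loads(raw):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
        # Keep the four coded dimensions, keyed by comment ID.
        codings[rec["id"]] = {k: rec[k] for k in REQUIRED_FIELDS - {"id"}}
    return codings

codings = parse_codings(RAW)
print(codings["rdc_dlun7f7"]["emotion"])  # → resignation
```

Keying by comment ID makes it cheap to join the model's coding back onto the original comment record, which is what the "Coding Result" view above displays for a single comment.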