Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgyS2Cw80…: "Not yet.. Give it some time and AI will have images with soul in them...…"
- ytc_UgzvPNKey…: "Those are the worst designed chairs I've ever seen. Lean back and they'll break …"
- ytc_UgwP-rYwj…: "can we make an AI to replace the person whose decision it is to weaponize AI? be…"
- ytc_UgyB3pK11…: "It's just upsetting, because if used ethically AI could've been a good tool to h…"
- ytc_Ugyzy0alJ…: "the regulation asked for is not for ai its for the ones that have been revealing…"
- ytc_UgxTPte62…: "A.I. trying to calculate our next evolutionary step and then implementing a plan…"
- ytc_UgyuVREgB…: "Everyone's so busy freaking out about AI being evil, they completely forgot that…"
- ytc_UgxzjSWTU…: "Finally, a comments section that gets it. 4 years ago people were mostly like “i…"
Comment
> Lol Claude did nothing noble, they just didnt want to be responsible for autonomous weapons yet because they don't trust their models to not blow up a school but they absolutely would if they could. Imagine thinking for yourself.

- Source: reddit
- Topic: AI Harm Incident
- Timestamp: 1772311322.0
- Likes: ♥ 12
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_o7x9twd","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"rdc_o7wyd6n","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_o7xukey","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"rdc_o7xeii2","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"rdc_o85gvui","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
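The raw response above is a JSON array of coding objects keyed by comment ID, with one entry per dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such an output could be parsed and indexed for the "look up by comment ID" view; the function and variable names here are illustrative assumptions, not part of the actual tool:

```python
import json

# Example raw model output, in the same shape as the response above
# (two entries reproduced from the source for illustration).
raw_response = """
[
 {"id":"rdc_o7x9twd","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"rdc_o7xukey","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and index each coding by its comment ID."""
    codings = json.loads(raw)
    # Drop the "id" field from each record; it becomes the lookup key.
    return {c["id"]: {k: v for k, v in c.items() if k != "id"} for c in codings}

by_id = index_codings(raw_response)
print(by_id["rdc_o7xukey"])
# → {'responsibility': 'company', 'reasoning': 'virtue', 'policy': 'unclear', 'emotion': 'mixed'}
```

Indexing by ID this way also makes it easy to detect when the model dropped or duplicated a comment: compare the keys of `by_id` against the batch of IDs that was sent.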