Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
If an autonomous robot kills someone, the owner/builder/programmer would be culpable for the killing. This system is already in place; for example, if a malfunctioning elevator kills someone, there will be some sort of manslaughter charge.
Source: reddit · Thread: AI Moral Status · Posted: 2015-04-20 18:03:59 UTC (Unix 1429553039) · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_oi3z371","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"outrage"},
  {"id":"rdc_ofexdq1","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_cqiewbj","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"rdc_cqiklge","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_cqiprrm","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
```
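Because the model returns one JSON array covering a whole batch, each coded comment has to be matched back to its ID and checked against the coding scheme before it reaches a results table like the one above. The sketch below shows one way to do that; the allowed labels per dimension are inferred only from the values visible on this page, so the real scheme may include more.

```python
import json

# Allowed labels per coding dimension -- inferred from the values
# shown on this page (assumption: the full coding scheme may be larger).
SCHEMA = {
    "responsibility": {"user", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"outrage", "indifference", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes},
    dropping rows whose labels fall outside the scheme."""
    out = {}
    for row in json.loads(raw):
        codes = {k: v for k, v in row.items() if k != "id"}
        if all(codes.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            out[row["id"]] = codes
    return out

raw = """[
  {"id": "rdc_cqiprrm", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "liability",
   "emotion": "indifference"}
]"""
coded = parse_batch(raw)
print(coded["rdc_cqiprrm"]["policy"])  # liability
```

Keying the result on the comment ID is what makes the "inspect the exact model output for any coded comment" lookup possible: the table for a given comment is just its entry in this dictionary.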