Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "But isn't that the end goal of the ELITES? To have half AI/human species? Easier…" (ytc_Ugwt2jtHK…)
- "Matrix, but without happy ending. That’s what can come. And no one would or coul…" (ytc_Ugwbt1Iec…)
- "I have one of the most complex jobs where I work and I hope I can train an AI to…" (ytc_UgyH9WplS…)
- "I'd consider waving my arms when riding behind a tesla, maybe the AI will read m…" (ytc_UgxWnEcFP…)
- "I think...AI will take most of the Jobs in the future...and will force people to…" (ytc_UgzOXEb2x…)
- "Are you saying all this to prove a point or you're just bad at art in general? J…" (ytr_UgxniQBZe…)
- "It's not that it's quick and easy that makes me hate A.I. art, it's that it crea…" (ytc_UgzNvD5he…)
- "Narrative Context Framing (NCF) is a technique that uses coherent narrative stru…" (ytc_Ugw2jM2vt…)
Comment
"They’re still safer than human drivers by a long shot. The corporate responsibility here is tricky but any responsible citizen should want to see cars replaced with Waymo’s, they save lives."

| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Harm Incident |
| Timestamp | 1765216631.0 |
| Score | -3 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | utilitarian |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_nt04mbm","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"rdc_nsz08x8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_nt0magr","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_nt2s0rz","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"rdc_nsz2slk","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
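A raw response in this shape can be parsed and sanity-checked before the coded dimensions are stored. The sketch below is illustrative only: the helper name `parse_coding_response` and the exact-key validation rule are assumptions, not part of the tool shown here; the field names and example records come from the response above.

```python
import json

# Example raw LLM response, in the same format as the array shown above.
raw = '''[
  {"id":"rdc_nt04mbm","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_nsz08x8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

# The four coding dimensions plus the comment id (taken from the response above).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(text: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept if it is an object with exactly the expected keys
    and string values; malformed records are silently dropped.
    """
    records = json.loads(text)
    valid = []
    for rec in records:
        if (isinstance(rec, dict)
                and set(rec) == REQUIRED_KEYS
                and all(isinstance(v, str) for v in rec.values())):
            valid.append(rec)
    return valid

coded = parse_coding_response(raw)
print(len(coded))  # 2
```

Dropping malformed records rather than raising keeps a single bad line in the model output from discarding the whole batch; a stricter pipeline might instead log or re-prompt on validation failures.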