Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "the filter was probably automatically added to the video call (i think it's a vi…" (ytc_Ugz-O-nwo…)
- "the one with the endless loop you showed chatgpt saying no pull with a red dot b…" (ytc_UgyO8giC7…)
- "It should be global but nobody will share their models.....you've got all the ba…" (ytc_UgxvOl0Rx…)
- "Don’t blame this on AI. Google started this more than a decade ago…..why pay for…" (ytc_Ugym7kyBW…)
- "Sorry but I'm squarely on the pro-AI side in this. Mainly because most of the on…" (ytc_UgyE1tXhB…)
- "Elon Musk is building a robot army. When he got his hands into our Government pr…" (ytc_UgxxsXUf_…)
- "Will we reach AGI was the most important question and he just said yes we will. …" (ytc_UgxENsF04…)
- "I love how these people still think they're the main victim of deepfakes when in…" (ytc_Ugwetk7FC…)
Comment
There's an argument that the prospect of collateral damage has also prevented more trigger happy solutions.
A drone has no consciousness, no moral compass, no accountability. You can basically now order murder *a la carte*. With reduced repercussions.
Source: reddit
Dataset: AI Moral Status
Posted: 1616690151 (Unix timestamp; ≈ 2021-03-25 UTC)
Score: ♥ 75
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_gs6njun","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_gs6siaz","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_gs76fdm","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_gs61v88","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_gs63shw","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
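The raw response above is a JSON array keyed by comment ID, so looking up the coding for a single comment is a matter of parsing it and indexing by `id`. A minimal sketch (the variable names are illustrative, not part of the tool):

```python
import json

# Raw LLM response, copied verbatim from the coding run above.
raw_response = """[
{"id":"rdc_gs6njun","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_gs6siaz","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_gs76fdm","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_gs61v88","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_gs63shw","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

# Index the codings by comment ID for direct lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding for the reddit comment shown above.
coding = codings["rdc_gs6siaz"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
# -> user regulate outrage
```

This matches the Coding Result table for that comment: responsibility "user", policy "regulate", emotion "outrage".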