Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I want to put focus on something, it is unexpected or maybe expected that Asmon …" (ytc_UgzryeUkM…)
- "Just imagine if you can copyright strike the 1 profiting wIt AI …" (ytc_UgwZWWB9f…)
- "Nobody thought chatGPT was going to reply with "yes" when she asked if it could …" (ytc_UgzUXRDvP…)
- "Eric Schmidt may not work for Google any longer but he sure is their propaganda …" (ytc_Ugzn6uCaw…)
- "With better management, Tesla’s billionish miles of driving data could have been…" (ytc_Ugxpj7VcM…)
- "I maintain that they are different versions of the same error. I disagree that t…" (rdc_djgqu1k)
- "I personally find landmines to be a far more morally reprehensible weapon that o…" (ytc_UgxA1b4Ce…)
- "We need a law that at least requires AI to be identified as such when they call.…" (ytc_UgxJAc4QA…)
Comment

> Just wait until someone gets the idea to use this sort of AI to craft tons of legit looking news sites filled with disinformation. Things are going to get far far worse.

| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Responsibility |
| Posted | 2019-12-21 (Unix timestamp 1576893600) |
| Score | ♥ 3 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response

```json
[
  {"id":"rdc_fbi16w1","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_fbi88xa","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_fbio22g","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_fbiejdg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"rdc_fbihe9o","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
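A raw response like the one above can be turned into the per-comment lookup this page offers. The sketch below is a minimal, illustrative approach: it parses the JSON array, validates each record against an assumed set of allowed codebook values (the `ALLOWED` sets are inferred from the examples here, not from the real codebook), and indexes the surviving codings by comment ID.

```python
import json

# Allowed values per coding dimension. These sets are ASSUMED from the
# sample records shown above; the actual codebook may define more labels.
ALLOWED = {
    "responsibility": {"none", "company", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"fear", "indifference", "approval", "mixed", "unclear"},
}

def index_by_id(raw_response: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID,
    dropping any record with a missing ID or an out-of-codebook value."""
    codings = {}
    for rec in json.loads(raw_response):
        cid = rec.get("id")
        dims = {k: v for k, v in rec.items() if k != "id"}
        if cid and all(dims.get(dim) in values for dim, values in ALLOWED.items()):
            codings[cid] = dims
    return codings

# Example with two of the records shown above:
raw = """[
  {"id":"rdc_fbi16w1","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_fbi88xa","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

codings = index_by_id(raw)
print(codings["rdc_fbi88xa"]["emotion"])  # fear
```

Dropping invalid records rather than raising keeps a single malformed line in a batch response from blocking the rest of the batch; a stricter pipeline might instead log or re-queue the offending IDs.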