Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgxQnNG9t…: "At school I was so bad that one of the art teachers complained that the only exp…"
- ytc_UgxuuYIb8…: "what scary about this is. It will happen soon enough. We will make some clones t…"
- ytr_UgxiAmPw2…: "@OneirosNagualDuplas nah. The lot of jobs that will be able to be done by AI but…"
- ytc_UgzS0qmWU…: "Ai art shouldn't be considered art. As some may know(Please correct me if im wro…"
- ytc_Ugx1UK0St…: "This interview tells me theres only a matter of time before the real end comes. …"
- ytc_UgyPtsO0D…: "Love seeing the comments assuming AI won't hurt us. Interesting to see this now …"
- ytc_UgyB_Ede1…: "The problem with that assessment is that if AI does the entry level jobs, you ha…"
- ytc_Ugw7e0E2y…: "Obviously, it's programmed to apologise, but i find it incredible that it was ab…"
Comment
The intellectual fallacy of AI haters is that they keep parading the concept that because AI fails at something, it is therefore completely useless. This is a flawed understanding. Humans make mistakes too. Even the smartest people make simple errors. Does that make them useless? No. The only thing that matters is if it provides efficiency gains. The answer for AI, even at this stage, is yes.
- Source: reddit
- Topic: AI Moral Status
- Posted: 1765323912 (Unix timestamp)
- Score: ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | utilitarian |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_nt6k0lj", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_nt75n3n", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_ntaea11", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_nt7448h", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nt799bq", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
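A raw response in this shape can be parsed and sanity-checked before the dimensions are written back to the coded comments. The sketch below is a hypothetical helper, not part of the tool shown here; it only assumes the five fields visible in the response above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) and checks that every record carries all of them.

```python
import json

# Two records copied from the raw response above, standing in for a full batch.
RAW_RESPONSE = """
[
 {"id":"rdc_nt6k0lj","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_nt75n3n","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
"""

# The five coding dimensions present in each record of the raw response.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a batch coding response and reject records missing any dimension."""
    records = json.loads(raw)
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} missing {sorted(missing)}")
    return records


coded = parse_coding_response(RAW_RESPONSE)
print(len(coded))  # 2
```

Validating the whole batch before storing anything means a single malformed record fails loudly instead of silently producing a partially coded comment.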