Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment

> I think you hit that one right on the nose there. And that also stands in line with what the article gets to: Think selflessly when defending moral imperatives, not selfishly

| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Responsibility |
| Posted | 1617657797.0 (Unix epoch ≈ 2021-04-05 UTC) |
| Likes | ♥ 4 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-25T08:13:13.233606 |
Raw LLM Response

```json
[
  {"id":"rdc_gtcv3tq","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_gtgvc1s","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"rdc_gthys76","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"rdc_gtor2im","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_gtr25qq","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
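The raw model response is a JSON array with one coding object per comment. The "look up by comment ID" feature above can be sketched as a small parser that indexes these objects by their `id` field; this is a minimal sketch, and the helper name `index_codings` is an assumption, not the tool's actual API. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown above.

```python
import json

def index_codings(raw_response: str) -> dict:
    """Parse a batch coding response (a JSON array of objects,
    each carrying an 'id' plus coded dimensions) and return a
    dict keyed by comment ID for O(1) lookup."""
    entries = json.loads(raw_response)
    return {entry["id"]: entry for entry in entries}

# One entry taken verbatim from the raw response above.
raw = '''[
  {"id":"rdc_gthys76","responsibility":"none","reasoning":"deontological",
   "policy":"unclear","emotion":"approval"}
]'''

codings = index_codings(raw)
print(codings["rdc_gthys76"]["reasoning"])  # → deontological
```

Looking up `rdc_gthys76` this way reproduces the Coding Result table above: responsibility `none`, reasoning `deontological`, policy `unclear`, emotion `approval`.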