Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Please stop 👇
Negative Impact: Climate Change & Habitat Loss:
Energy Intensity:…
ytc_UgwICpQHu…
@MirkoVukusic in that context LLMs still suck. Anything that becomes sufficient…
ytr_UgxvMpPop…
I admit I know very little about AI, but if Sydney is/was getting a bit "uppity",…
ytc_Ugzg9GV5x…
While I understand the sentiment that I would prefer generative AI be launched i…
ytc_Ugx0wzZh7…
@TheMikesylv that's just the text-to-speech, it has nothing to do with the LLM (…
ytr_Ugw1eoycg…
Well you know what's going to happen people are going to buy them and use them f…
ytc_Ugz81xPWb…
Could be?
Is, Phantom, how much of a threat AI is. Because here’s the kicker- …
ytr_UgwYJAAKt…
if you force an ai to say yes or no to a question that does not have a yes or no…
ytc_UgyZlLkD7…
Comment
He was the guy warning about AI and then went out of his way to make his warning relevant.
reddit · AI Moral Status · 2025-07-09 · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_n2341s3","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_n22nb9t","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"rdc_n25l7gf","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"rdc_n2303sf","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"rdc_n2477yh","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
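The raw response above is a JSON array with one object per coded comment, keyed by comment ID, with one field per coding dimension. A minimal sketch of how the "look up by comment ID" view could map such a response to the coding-result table (the `lookup` helper and the embedded sample are illustrative assumptions, not the tool's actual code):

```python
import json

# Hypothetical raw LLM response in the format shown above (IDs are examples).
raw_response = """
[
  {"id": "rdc_n25l7gf", "responsibility": "user", "reasoning": "virtue",
   "policy": "unclear", "emotion": "mixed"}
]
"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw, comment_id):
    """Parse a raw LLM response and return the coding row for one comment ID."""
    rows = json.loads(raw)
    for row in rows:
        if row.get("id") == comment_id:
            # Keep only the coding dimensions, defaulting missing ones to "unclear".
            return {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
    return None  # comment ID not present in this response

print(lookup(raw_response, "rdc_n25l7gf"))
# → {'responsibility': 'user', 'reasoning': 'virtue', 'policy': 'unclear', 'emotion': 'mixed'}
```

Defaulting a missing dimension to `"unclear"` mirrors how the result table treats dimensions the model did not commit to.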