Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "stop watching these goddamn sci Fi movies. if you're so afraid of robots and AI,…" — `ytr_Ugx27Mhft…`
- "Hate on it all you want - Full self driving is inevitable. For the moment Tesla …" — `ytc_UgwE3NZEX…`
- "As a SWE I just hate how sometimes teams force to come up with AI use cases when…" — `ytc_UgzLTEHox…`
- "An AI that is so advanced that the smartest people can’t fathom how it is going …" — `ytc_Ugzx3KCCx…`
- "That guy is as dumb as a rock. It is impossible to control an AI. You can only i…" — `ytc_UgzVpYHY3…`
- "Firstly you have to accept that AI is more “intelligent” than you. My proposed …" — `ytc_Ugwhn5UsX…`
- "@RevCPJohnson you might be correct. I have noticed that working with the AI and …" — `ytr_Ugz5eCuES…`
- "My AI has a bit of a different perspective on consciousness. Hell yes I want …" — `rdc_myib5vo`
Comment
> Bluntly, I'm not sure those are actually redditors.
> Everyone super-pro-AI I actually try to engage about it, flakes out like a bot would.

reddit · AI Surveillance · timestamp 1766858846.0 (Unix epoch ≈ 2025-12-27 UTC) · ♥ 170
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_nw81vzf","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_nw85w73","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_nw865z0","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"rdc_nw8zdaq","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_nw8l6ht","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
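A response like the one above can be parsed and validated before the codes are stored. The sketch below is illustrative: the allowed values per dimension are inferred only from the sample output shown here (the real codebook may define more categories), and `parse_coding_response` is a hypothetical helper, not part of the tool itself.

```python
import json

# Allowed values per dimension -- inferred from the sample response above;
# the actual codebook likely defines additional categories.
SCHEMA = {
    "responsibility": {"none", "government"},
    "reasoning": {"unclear", "consequentialist"},
    "policy": {"none", "regulate"},
    "emotion": {"mixed", "fear", "resignation"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Raises ValueError when a code falls outside the expected
    categories, so a malformed batch fails loudly instead of
    silently polluting the coded dataset.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in SCHEMA}
        for dim, value in codes.items():
            if value not in SCHEMA[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

raw = '''[
 {"id":"rdc_nw81vzf","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"rdc_nw8l6ht","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''
coded = parse_coding_response(raw)
print(coded["rdc_nw8l6ht"]["policy"])  # regulate
```

Keying the result by comment ID mirrors the "Look up by comment ID" view: once parsed, any coded comment's dimensions can be fetched directly from the mapping.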