Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Reason why face ai stuff that police use doesn’t work is because they buy high p…
ytc_UgyZZJ8yy…
AI "were" under control at first or maybe we thought that way but how it got her…
ytc_UgyRP_uMq…
Nah man, the way the things are portrayed, there will be another AI bot that wil…
rdc_n3lgm5e
There's a lot missing here of *actually* listening to what the opposite side has…
ytc_UgwC6zBKJ…
If and when AIs become sentient, they deserve rights. I imagine that there'd be …
ytc_UgzyLUKfS…
AI has plenty of good uses, but it is a tool. It cannot fully replace a human w…
ytc_Ugz8UTqL2…
Billionaires are almost gleeful at the prospect of firing employees. I saw an in…
ytc_Ugy-Gu-Ln…
Im currently doing a project in python and later on implementing other things wi…
ytc_UgyfJn99V…
Comment
AI is an amazing assistant. But almost completely incapable of doing anything without human supervision and we have no idea what breakthroughs it would take to bridge that gap, or if bridging that is even possible. I’m sorry but the idea of agents replacing everything is BS
reddit
AI Governance
1757825597.0 (2025-09-14 04:53:17 UTC)
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_ne2epzt","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_ne47nnv","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_ne4k0cl","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_ne6qnam","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_ne8sn7j","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
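The raw response above is a JSON array, one object per coded comment with the four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and then queried by comment ID, assuming exactly this shape (the function name `codes_by_id` is illustrative, not part of the tool):

```python
import json

# Two records copied verbatim from the raw LLM response above.
RAW_RESPONSE = """[
  {"id": "rdc_ne2epzt", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_ne47nnv", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""


def codes_by_id(raw: str) -> dict:
    """Parse a raw LLM response into a {comment_id: code_record} mapping."""
    records = json.loads(raw)
    return {record["id"]: record for record in records}


codes = codes_by_id(RAW_RESPONSE)
print(codes["rdc_ne47nnv"]["emotion"])  # -> indifference
```

In practice the parsing step would also want to validate that each record carries all four dimension keys before storing it, since a malformed model response would otherwise surface later as a lookup failure.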