Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_Ugwew7YET… · "Thanks to AI I've been able to learn and understand things independently, maybe …"
- ytc_UgzDqengJ… · "Imagine having distant parents and being raised by an AI / Those kids won't be ab…"
- rdc_fuqj0tq · "The company I work for was going to get a contract for the city we work in to ma…"
- ytc_UgwoIdAlg… · "Flawed logic, but not on Tesla's part. FortNine makes the spurious analogy to tw…"
- ytc_UgwlMFKKv… · "When people replicate someone else's art through Ai generation, it takes out all…"
- ytr_UgzFiNuTS… · "I'm not at all fooled by A.I. I can sniff that fugazi crap out, prima facie. I d…"
- ytc_UgzYygT6z… · "As a real human person born of human parents. I can say for sure that I welcome…"
- ytc_UgyAElhuk… · "If autonomous is the future, the question doesn't become how to protect trucking…"
Comment
It's impossible to have any system created by man immune to the faults of man.
Look at the ideological battleground of big tech: one party says that its content is being filtered and hidden because of politics. So the operator of that algorithm has to decide how to react. Either they make changes to their processes to address the claim (which adds human politics to the situation), or they do nothing and are accused of confirming the claims of political bias by the aggrieved party (adding human politics to the situation).
Even in a hypothetical scenario where the original algorithm was objectively perfect and unbiased, any external claim of error by humans would add chaos and bias to the *perception* of that algorithm, to the point where it may be abandoned by human users. Its replacement system would then go through the same cycle of introduction > acceptance > widespread use > niche opposition > growing opposition > collapse and disuse.
Technology for/by humans can never be perfect because humans can never be perfect.
reddit
AI Moral Status
2023-01-19 (Unix timestamp 1674148455)
♥ 2
Coding Result
| Field | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_j4y8mbi","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_j4zijki","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"rdc_j4ziw8f","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"rdc_j50k5uo","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"rdc_j50y73q","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
```
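The look-up-by-comment-ID step can be sketched as follows. This is a minimal illustration, assuming the raw model response is a JSON array of coding rows like the one above; `index_by_id` is a hypothetical helper, not part of the tool shown here, and only two rows from the sample response are included for brevity.

```python
import json

# Two rows copied from the sample batch response above; each row carries
# the four coding dimensions (responsibility, reasoning, policy, emotion).
raw_response = """
[
  {"id":"rdc_j4y8mbi","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_j50y73q","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
"""

def index_by_id(payload: str) -> dict:
    """Parse a batch coding response and index its rows by comment ID."""
    return {row["id"]: row for row in json.loads(payload)}

codes = index_by_id(raw_response)
print(codes["rdc_j50y73q"]["emotion"])  # prints "resignation"
```

Indexing by ID makes it cheap to join a row back to the original comment record, which is how the coding-result table for `rdc_j50y73q` above can be rendered from the raw response.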