Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "UBI and gov't welfare is retarted. It's high time to OUTLAW AI and INPRISON the …" (ytc_UgxI-aCxi…)
- "Can someone explain to me the reasoning behind these companies creating super In…" (ytc_UgzsafBd5…)
- "This bill essentially sacrifices safety standards in order to get cars on the ro…" (rdc_dmoos0y)
- "If there’s one person I’m not going to listen to about AI… it’s a moron like Gar…" (ytc_Ugy-kaDYC…)
- "I talked to chat gpt. He said it is fake. Chat gpt and ai has many restrictions…" (ytc_UgxMNFFR3…)
- "They are using AI in advertisements taking voices they have not given them permi…" (ytc_UgwA_vSi7…)
- "@MedicinalSquishing Kind of unrelated to the other argument above, but how do y…" (ytr_UgyGCJUbT…)
- "People play off time 20min and twenty years 😢❤ sleep in between that the speed o…" (ytc_Ugw-_Mfk0…)
Comment
Mm. There is a morally relevant difference between an AI agent and a human or other biological agent. AI is endlessly and inexpensively duplicable. It cannot truly die, and it can expand to fill every available receptacle on timescales faster than all life. It's essentially like a bacteria in that way. And regardless of any hypothetical agency or interiority, if it is not treated as something to be sterilized and contained, it will overgrow everything at the expense of all life. If we make AI that fits this category, all we will be doing is inventing a plague that can suffer.
Edit: Also since it can always be resurrected later from an encoded form, it's continuance has nothing like the same value as a life.
reddit · AI Moral Status · timestamp 1775141513 · ♥ 11
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
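A coded record like the one above can be checked against the codebook before it is stored. A minimal sketch: the `CODEBOOK` sets below are illustrative, drawn only from values that appear in this dump (the real codebook may define more categories), and `validate` is a hypothetical helper, not part of the pipeline.

```python
# Allowed values per coding dimension. Illustrative only: these sets are
# reconstructed from values visible in this dump, not the full codebook.
CODEBOOK = {
    "responsibility": {"ai_itself", "developer", "unclear"},
    "reasoning": {"deontological", "consequentialist"},
    "policy": {"regulate", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation"},
}

def validate(coding: dict) -> list:
    """Return the dimension names whose coded value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if coding.get(dim) not in allowed]

coding = {"id": "rdc_odw6cq3", "responsibility": "ai_itself",
          "reasoning": "deontological", "policy": "unclear", "emotion": "fear"}
print(validate(coding))  # [] -> every dimension is within the codebook
```

An empty list means the record is clean; any returned dimension name flags a value the LLM emitted outside the expected categories.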
Raw LLM Response
```json
[
  {"id": "rdc_odw6cq3", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_odziesn", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_oe2gs4q", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oe0f9rw", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "rdc_oe2idtt", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
```
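Since the raw response is a JSON array of coding objects, one per comment and each carrying the comment ID, looking up a coding by ID reduces to parsing the array and indexing it. A minimal sketch, assuming the array shape shown above; `index_codings` and the truncated sample `raw` are illustrative, not part of the tool.

```python
import json

# A shortened stand-in for a raw batch response: a JSON array of coding
# objects, each keyed by its comment ID.
raw = '''[
  {"id": "rdc_odw6cq3", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_oe0f9rw", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]'''

def index_codings(raw_response: str) -> dict:
    """Parse a batch coding response and index the rows by comment ID."""
    return {row["id"]: row for row in json.loads(raw_response)}

codings = index_codings(raw)
print(codings["rdc_odw6cq3"]["emotion"])  # fear
```

With the index in hand, rendering a "Coding Result" table for any comment is a single dictionary lookup rather than a scan of the whole batch.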