Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
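The ID lookup above can be sketched in a few lines. This is a hypothetical illustration, assuming the coded records are held in memory as a list of dicts shaped like the entries in the raw LLM response shown further down; `build_index` is an illustrative helper name, not part of the tool.

```python
# Minimal sketch of a comment-ID lookup, assuming coded records are a
# list of dicts like those in the raw LLM response on this page.
def build_index(records):
    """Index records by their comment ID for constant-time lookup."""
    return {rec["id"]: rec for rec in records}

# One record copied from the raw response below, for illustration.
records = [
    {"id": "ytc_UghrBLrWi9JmwHgCoAEC", "responsibility": "distributed",
     "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
]
index = build_index(records)
```

With this index, `index["ytc_UghrBLrWi9JmwHgCoAEC"]["emotion"]` returns `"resignation"`.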
Random samples
First, AI increases profit in human markets: it predicts demand, reduces costs, …
ytr_Ugw_ApDXY…
So what stops the AI that we have now to go ahead and build the super Intelligen…
ytc_Ugz831bKu…
This is really weird, those names-Susan Verghese, George Scaria Verghese, Anish …
ytc_Ugws1nWd7…
A legion of AI mules. It's the death of substance. How much will a meaningful …
ytc_UgwtMvypO…
He just doesn't like to think about what would happen?
That is what you should b…
ytc_Ugy35h-sl…
Banning all ai art is misleading as what happens if you like art but can’t draw …
ytc_UgzkAebuO…
Sorry to break this to ya bro. But the issue isn't that coders will be completel…
ytc_UgxJnGCAi…
Basic knowledge from primary school, about organic life - protein, lipids or liv…
ytc_UgwSwocaC…
Comment
AI rights is likely going to be a human issue more than anything else. I don't believe that a clear cut Organics vs Synthetics type of conflict will happen, but rather human factions with wildly different views on AI rights will be the ones fighting over it, at least at first. Fallout 4 really does a pretty good job of summarizing the three most radical viewpoints. The Institute, the Railroad, and the Brotherhood are all human led organizations, two of which have a substantial synth numbers, and all 3 have such radically different views on self aware AI that they end up in open warfare over the issue.
It probably won't get that bad in the real world, but once self aware artificial general intelligence becomes real, there will be strong factions that arise that want such AI to have rights and freedoms, those that believe them to be no different from the narrow AI we have today, that are essentially just tools, and those that feel they are abominations that should not exist. Regardless, it's likely humans will care about this subject more than the AI will, at least for a while.
youtube
AI Moral Status
2017-02-24T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugh4xkVi4MfetHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UggLQKwVGkmGH3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UggAnOn8fXWe_XgCoAEC","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgglPt9FSMOxZHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgijavkW4w4I8HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugg4Od1C-VYHqHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UggUKbdXKJJrMngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UggEZyTQU4SE3ngCoAEC","responsibility":"none","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugiata-MDSuPkHgCoAEC","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UghrBLrWi9JmwHgCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
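The raw response above is a JSON array of per-comment codes. A minimal validation sketch in Python, assuming the four dimensions shown in the Coding Result table; the allowed value sets are illustrative, inferred only from the codes visible on this page (the actual codebook may define more categories), and `parse_response` is a hypothetical helper name:

```python
import json

# Allowed values per dimension, inferred from the codes visible on this
# page; the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"indifference", "approval", "mixed", "fear", "outrage",
                "resignation"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response into a mapping of comment ID -> codes,
    rejecting any record whose dimension value is not in ALLOWED."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

Applied to the response above, such a parser would map each `ytc_…` ID to its four codes, e.g. the last record to `emotion: "resignation"`, matching the Coding Result table.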