# Raw LLM Responses

Inspect the exact model output for any coded comment. Comments can be looked up by comment ID, or chosen from the random samples below.

## Random samples
- `ytc_UgwR5sOEe…` — Asian workers bribe for low-wage factory jobs due to limited alternatives. Criti…
- `ytr_UgwHiTvBD…` — My point with automated production and hand-crafting/carpentry wasn't about mode…
- `ytc_UgwLBupkd…` — AI is like a dollar bill. It can be used for good or bad. We create what we fe…
- `ytr_Ugzb06eDr…` — @javiceres But it won't take long until bands start making AI songs, making a fe…
- `ytc_UgxEtK_-G…` — I read an article about an AI program that was told to shut down and it rewrote …
- `ytc_UgxSp2e_D…` — I’ve been telling my son this for a while. Technology will be our downfall. I …
- `ytc_UgweVajS2…` — Bruh its not that serious, if you know your worth then why would you care about …
- `ytc_UgxvZ_01E…` — I've been speaking with ChatGPT and Claude usually in a very inappropriate manne…
## Comment

> The idea of granting robot rights is completely at our hands and our choice. We are the ones who created robots and we are the ones who continue to improve the intelligence of robots while well aware of the possibility of sentience, so I think a good answer is it all depends on what you want. If you don't want a world where robots have rights, you don't gotta have one. It is our choice to make a robot that is sentient enough to demand rights so it is also our choice to avoid that and simply make robots very intelligent but not to the point of sentience and freedom. We can easily make robots who act human but only within the limits of their programming. It is our choice to make robots who act human because they are not withheld by the limits of their programming and, like a human brain, they are expanding their own programming independently without the aid of humans which allows them to have sentience. So it is our choice to make sentient robots. If robots become sentient and kill us all, that would be our faults. We did not have to make those robots sentient, but we chose to program them to be sentient and therefore kill us.

Platform: youtube · Video: AI Moral Status · Posted: 2017-04-17T00:2…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
## Raw LLM Response

```json
[
  {"id":"ytc_UghYexzMOt3HZHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugi0UdVbvS94CXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UghXMjd6iMIlc3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UghveoVOf9sGxHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgjmPVGmp27jk3gCoAEC","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugit6t1GkeUGMngCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgiD3MXHTAvZB3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgjZZuoWAcawn3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgiLNSy2wGiwwngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UghFQ5fZR_jhr3gCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
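A batch response in this shape can be parsed and sanity-checked before the records are stored. The sketch below assumes the allowed values are exactly those observed in this response (the real codebook may define more); the `ALLOWED` sets and `parse_batch` function are illustrative, not part of the tool.

```python
import json

# Code values observed in the response above. These sets are an assumption
# inferred from this one batch; the full codebook may allow other values.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "distributed"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "approval", "fear", "outrage"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and validate every coded record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

raw = '''[
  {"id":"ytc_UgjmPVGmp27jk3gCoAEC","responsibility":"distributed",
   "reasoning":"contractualist","policy":"none","emotion":"indifference"}
]'''
records = parse_batch(raw)
print(records[0]["responsibility"])  # -> distributed
```

Rejecting out-of-codebook values at parse time is what makes the per-dimension tallies downstream trustworthy: a hallucinated label fails loudly here instead of silently becoming a new category.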