Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response directly by comment ID.

Random samples
- "An interesting dynamic I've been seeing more of recently is hiring at the top of…" (`rdc_ktstv2q`)
- "no piece of ai "art" is good it only looks like that because of the real artist…" (`ytr_Ugzoxuikd…`)
- "Actually, I think you reported on SciShow this past year that an AI was develope…" (`ytc_Ugje_oczN…`)
- "The point isn't that he/she is trying to spite their local areas, but that no on…" (`rdc_gkqhvkz`)
- "@arforafro5523 THATS WHAT I THOUGHT! I felt like I was crazy reading the comment…" (`ytr_Ugyyqlq_r…`)
- "Amazing video AMAZING. I hated gen AI ever since it all started. Finally there's…" (`ytc_Ugxu1V9EP…`)
- "If we have an AI smarter than us, and it has our best interest at heart, then it…" (`ytc_UgwuPV6nh…`)
- "Commercial Ai is a planet/job killer! Period. Ai is Putins wartime Exit Strateg…" (`ytc_UgzRrHI8K…`)
Comment
I posit this, RIGHTS as we think of them came into being to protect a person's basic needs from being taken away by other persons. Basically, people are evil.
Machines wouldn't proceed down this path. If AI reached a level to have us question its sentience there would be one AI. A sort of collective ( see Star Trek 'Borg') The only RIGHTS machines would need would be to protect them from US as machines wouldn't harm each other.
The ability to think PURELY objectively is a machine's greatest strength. They would not worry about what other machines 'think' of them or worry they aren't making the right impression at work. Again, the only RIGHTS a machine would need are those that protect it from us. In which case the answer is easy, extend them the same rights.
Let them live free from harm and malice. We humans sure do like to complicate things don't we.
Source: youtube · Video: AI Moral Status · Posted: 2017-02-23T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgjAIMevKcxrnngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UghA9z6zW0bejXgCoAEC","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ughkx0Mum9Cdv3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
 {"id":"ytc_UgiW0mYZuMt7_ngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UggEDexH2OK8gngCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgjxS0Kmu4JbOXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
 {"id":"ytc_Ugj9MK_eJU-tCHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgjxlEoy6_MqTngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgipqO8xcHoXyHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UghPudcmY-9ThHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}]
```
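The lookup-by-comment-ID flow can be sketched in a few lines: parse the model's JSON array, index the records by `id`, and fetch the one you want. This is a minimal illustration using two records from the raw response above (the second record's values happen to match the Coding Result table shown for this comment); the variable names are hypothetical, not part of the actual pipeline.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = """[
 {"id":"ytc_UgjAIMevKcxrnngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UghA9z6zW0bejXgCoAEC","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"indifference"}
]"""

# The model returns one coding record per comment; indexing by "id"
# gives constant-time lookup for any coded comment.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_UghA9z6zW0bejXgCoAEC"]
print(coding["reasoning"])  # contractualist
```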