Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgwgN-yK_… : "There are some jobs AI can automate, and they are usually crappy jobs. I’d be re…"
- ytc_Ugw2JIGQ6… : "Where I REALLY want to see AI art succeed is interim frames in animation. AI (li…"
- ytc_UgxPicgdQ… : "Well that's the most dangerous part. The people who have access to the ai are th…"
- ytc_Ugzdypi8D… : "I dont think we need worry about any other Nation if Tesla has an army of these …"
- ytr_Ugy-TFz0v… : "With a 99 percent unemployment rate, the ruling classes cannot have a huge popul…"
- ytc_Ugxt1nc7c… : "For those that just want to know what the five jobs actually are - first off, he…"
- ytc_UgzkkJ7FQ… : "Most of us will likely die, and billionaires will evolve as a new species using …"
- ytc_UgxrEf8pT… : "I like how the character Michael Knight from that old TV show 'Knight Rider' int…"
Comment
The issue of machine rights would only come up in the singularity scenario where an AI becomes truly self aware. Until that point is reached there will be tons of limited AI with a rudimentary self-teaching algorithm (like Siri) but they are still machines with no other function programmed into them than to find ways to more efficiently do their jobs and that does not mean they have consciousness.
In the case of an AI that achieves sentience then logic would assume that it would have rights as sentience is a valuable resource in a universe. But we all know that logic is not the primary factor in deciding rights and laws, morality plays more of a role and morality can be twisted and turned from "It's moral to not kill your neighbor" to "It's moral to work the people of race X to death" in the blink of an eye.
| Platform | Video | Posted |
|---|---|---|
| youtube | AI Moral Status | 2017-02-24T16:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UggJIup0iIlZVXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugiqorz5t1QhRHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
{"id":"ytc_UghZ5Le5QNo9W3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugj2YPylz7gmH3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiIQ5CNwZV0VXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UggW5A_hvTuZv3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugj0GWYELnqn_HgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
{"id":"ytc_Ugi37YvVMkNA3ngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjFDOQXOgm_-HgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgjVqIuTCm8kfngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
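The raw response above is a JSON array with one record per coded comment, each carrying the four coding dimensions shown in the Coding Result table. As a minimal sketch, such a response could be parsed and validated before loading it into the dashboard. Note the allowed-value sets below are assumptions inferred only from the values visible in this sample, not a complete codebook, and `parse_codes` is a hypothetical helper, not part of the actual pipeline:

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# The real codebook likely defines more categories (assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "resignation", "unclear",
                "approval", "outrage", "fear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response into coded records, dropping invalid rows."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in the sample start with ytc_ or ytr_.
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Keep the row only if every dimension holds a known value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Invalid rows are silently dropped here; in practice one might instead flag them for re-coding so no comment is lost from the sample.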