Raw LLM Responses
Inspect the exact model output for any coded comment. Comments can be looked up directly by ID; a sketch of how such a lookup might work is given below.
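This page does not show the lookup implementation. The following is a minimal Python sketch, assuming the raw batch responses are stored as a JSON array shaped like the one at the bottom of this page; the file name `raw_responses.json` and the truncated-prefix fallback are illustrative assumptions, not part of the tool.

```python
import json
from pathlib import Path

def lookup_comment(comment_id: str, responses_path: Path) -> dict | None:
    """Return the coded record for comment_id, or None if it is absent.

    Assumes responses_path holds a JSON array of records shaped like
    the raw response shown at the bottom of this page.
    """
    query = comment_id.rstrip("…")  # IDs in this UI are displayed truncated
    for record in json.loads(responses_path.read_text()):
        if record["id"] == comment_id or record["id"].startswith(query):
            return record
    return None

# Hypothetical usage:
# lookup_comment("ytc_Ugww6qRG8pzD28KzuvZ4AaABAg", Path("raw_responses.json"))
```

Note that a prefix match can be ambiguous; an exact ID should be preferred when available.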
Random samples (truncated comment text, followed by the comment ID):
- "This scenrio while believable don't account for a couple of factors. 1. At the e…" (ytc_Ugy5TsUeD…)
- "@LaurentCassaro I'm just your bog standard autistic electrician, who enjoys …" (ytr_UgxPbs-or…)
- "She does have legal standing i guess. One possible legal avenue is to pursue a …" (ytr_UgySYE21t…)
- "AI will never have a reason to exist without a purpose, to serve us. If super in…" (ytc_UgzXw6_J9…)
- "Training AI Robots the use of Weapons, not a good idea! Just how long do you thi…" (ytc_UgzHJOjfG…)
- "There is a hard asymptote from hardware limitations. If they make a breakthrough…" (ytc_UgxK4SYFI…)
- "I don't think that self-preservation in context of language-model AI is right wa…" (ytc_UgzORCjkt…)
- "OK, but what about the millions of lines of code that Ai can write and taking jo…" (ytc_UgyjjDVZd…)
Comment
I think a better question would be: why should humans have any right to give AI rights?
We are simply not evolved enough to even understand our own consciousness and state of being, so we are definitely not evolved enough to understand what constitutes a new life form when we see it.
Then again all types of rights are simply regulations made up by people to control other beings. Even with a comprehensive set of rights we still treat other beings like shit, so i think that we would have no right to give intelligent machines rights. An AI would be able to come up with a much better code of conduct for itself and others than humans ever could.
After all we are only human and a machine could make a better set of rules because it would lack human error.
The real question here is: would we accept a set of AI rights written by an AI?
Source: youtube · Video: AI Moral Status · Posted: 2018-02-19T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
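The codebook itself is not reproduced here, so the sketch below only reconstructs the result shape from the table above and the batch output below; the value sets are what this sample happens to contain, not necessarily the full schema, and all names are hypothetical.

```python
from dataclasses import dataclass

# Value sets observed on this page only; the full codebook may define more.
RESPONSIBILITY = {"none", "user", "developer"}
REASONING = {"deontological", "consequentialist", "unclear"}
POLICY = {"none", "unclear"}
EMOTION = {"approval", "fear", "indifference", "mixed", "resignation"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str  # one of RESPONSIBILITY
    reasoning: str       # one of REASONING
    policy: str          # one of POLICY
    emotion: str         # one of EMOTION
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:24:53.388235"
```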
Raw LLM Response
```json
[
  {"id":"ytc_Ugw4jv3CEWzURRu5HsN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwMfYuv9cf_JLp-n9h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw8aRZQRBx68_zKO9l4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxCiFXiou1vqLS4Dxl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwu5KhSaSD6YSzpwd54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxQduWc7dqx2_EgNEd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzlzabsKq9ISy3CBTh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxOjsFP0aIFzc2ww-54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxAr-F4o1eHq2J_kPp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugww6qRG8pzD28KzuvZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
```
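Since the model returns one JSON array per batch, a natural post-processing step is to parse the response and screen each record before storing it. A minimal sketch under the same assumptions as above (the function and constant names are hypothetical):

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw_response: str) -> list[dict]:
    """Parse one raw LLM batch response, keeping only well-formed records."""
    valid = []
    for record in json.loads(raw_response):
        # A record missing any coding dimension is dropped here; a real
        # pipeline would more likely queue that comment for recoding.
        if REQUIRED_KEYS <= record.keys():
            valid.append(record)
    return valid
```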