Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "A.I apparently is a big fan of Dr. Zeus books. I mean just listen to the story.…" (ytc_Ugz_qZW1i…)
- "The problem is that as AI improves (which will happen very rapidly), AI will be …" (ytc_UgwzPZ6tI…)
- "There's no thing such as "Ai style". It ALREADY was trained with real art, so it…" (ytr_Ugw2h_kpP…)
- "Why is the robot designed to crush the vegetables. This seems like a design flaw…" (ytc_Ugyb20S_p…)
- "It was interesting until the guy admitted he was a Marxist, and then she started…" (ytr_UgzZLU8gF…)
- "Well, we all need things to survive. People need food, plants need sun and water…" (ytc_Ugz4EpJs7…)
- "At first I was worried and discouraged from pursuing art as a career with this A…" (ytc_Ugwfx_lnX…)
- "the fight over ai art have always been a bit dumb to me people are complaining o…" (ytc_Ugz9j9VVO…)
Comment
When robots come to surpass humans, what need would they have for rights from humans? Rights, after all, are entitlements or benefits granted from one party to another. In human societies, rights are predominantly conferred from humans to humans or other entities. However, as artificial intelligence develops autonomy and agency, it would be reasonable to suggest that this autonomy and agency would render it indifferent to human authority. Artificial intelligence would likely not seek rights from humans in the same manner that humans confer rights onto others. The question, then, is not whether we should give robots rights, but why they should give us rights, when it comes to that point.
Source: youtube · Video: AI Moral Status · Posted: 2024-01-27T05:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw7yxlRErHmvokpT794AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxTjOc0YAD8w5VPVjl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwtFvqRYKCfbEN5yiN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw2K934SsLLKWgPuf54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzwTRaYUjVnwDc3lsN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwlfKYuxpHda6ez0ip4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw61j2xjzxWZv2cR3J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwkuWWV6H3adxBmlnV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz8FuZm8w4XH0Rxx1l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx9cuzBVLoEZ_pSYC54AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
```
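The lookup-by-comment-ID flow above can be sketched in a few lines: parse the raw batch response as JSON, index the rows by their `id` field, and retrieve one comment's coded dimensions. This is a minimal illustration, not the tool's actual implementation; the two sample rows are copied from the response above, and the `lookup` helper name is hypothetical.

```python
import json

# Raw LLM batch response (two rows excerpted from the response above).
raw_response = """
[
  {"id": "ytc_Ugw7yxlRErHmvokpT794AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx9cuzBVLoEZ_pSYC54AaABAg", "responsibility": "government",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"}
]
"""

# Index the coded rows by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for a comment; raises KeyError if absent."""
    return codes[comment_id]

print(lookup("ytc_Ugw7yxlRErHmvokpT794AaABAg")["responsibility"])  # ai_itself
```

A real pipeline would also validate that each dimension's value falls in its allowed set (e.g. `responsibility` in `{"ai_itself", "company", "user", "government", "none"}`) before accepting a row, since LLM output is not guaranteed to be well-formed.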