Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples
- ytc_UgwBPmTdj…: France has a strict no autopilot usage. The transportation safety ministry will …
- ytc_Ugz19wLTx…: Those in power will only grow there own power. We as a species are disgraceful.…
- ytr_UgwvFRzsx…: i am an electrical engineering student so i dont really care, however, ngl, buil…
- ytc_UgzukYkCc…: At the grocery store i work at, we have a facial recognition software that detec…
- ytc_UgwEBds7A…: We need AI to verify AI! I think the Gartner hype cycle needs to be understood a…
- ytc_Ugx1dld6y…: When Ai is left talking to Ai without some control we could be in trouble! If t…
- ytc_UgxZD2nmS…: it isn't that much regulation as long as there is no human life in danger all yo…
- ytc_Ugyk-dyD8…: I work in customer support within financing and we have an AI chat bot. Nearly e…
Comment
Unfortunately this video only addresses the ethical questions (while leaving out legal ones), but forgets to mention that it is doing so.
Even if there would be an absolute truth, that they wouldn't deserve such rights now or in the future (which I don't believe will be the case), humans may still conclude that it may turn useful to grant them such rights. We have handed out legal personality to apparently non-sentient entities in the past (companies, states and even to the Whanganui River) because it was deemed a useful construction (at least by the governing powers).
I know that similar discussions are currently underway in legal sciences whether it would be useful to extend legal personalities to autonomous self-learning systems, where it's not always clear which party may have to be held liable (e.g. think of autonomous cars, that may acquire patterns on how they should operate on streets after they were produced, due to some usage pattern of their [previous?] users).
Source: youtube · Video: AI Moral Status · Posted: 2017-03-26T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UghrtkIaEYufGXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghCvNhEHN-AcngCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgiOBS6RkHXMSHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgjwY2J2WgKWg3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UggUC5VN_TTCq3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz6otA0YsK1H_oU8AB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"liability","emotion":"approval"},
{"id":"ytc_UgyIkqhIIjdH2ymktNx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz_WZU3jCe3MYPLU4B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxAkrhfdggp5M7Mml14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw1Tu2KEOx1r2EjJj54AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"mixed"}
]
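The raw response above is a plain JSON array, one object per coded comment. A minimal Python sketch of how such a batch could be parsed and indexed for the lookup-by-ID view (the function name and the allowed-value sets are illustrative, inferred only from the categories visible in this sample; the pipeline's full codebook may differ):

```python
import json

# Allowed categories per coding dimension, inferred from the sample
# response above (assumption: the real codebook may contain more values).
ALLOWED = {
    "responsibility": {"government", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_coding_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment ID, rejecting unknown category values."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id
```

With the batch indexed this way, the "Look up by comment ID" view is a dictionary access, e.g. `parse_coding_batch(raw)["ytc_Ugw1Tu2KEOx1r2EjJj54AaABAg"]` returns the record shown in the Coding Result table above. Rejecting unknown category values up front catches the common failure mode where the model invents a label outside the codebook.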