Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "AI is being trained to identify faults and improve efficiency. It is obvious (t…" (ytc_Ugxl21Swh…)
- "„I can't see mich of a difference in ai learning from art to make new and a huma…" (ytr_UgwNMVSfT…)
- "I dont watch this often. But this one i had to voice an opinion. AI is NOT a goo…" (ytc_UgzhBpd5Y…)
- "Oh pleeeeease. You are just mad because some tech nerds took away the only thing…" (ytr_UgyeS9bBZ…)
- "there aren’t any “they” & there is not any “they are alive.” We are alive and we…" (ytc_UgxLsVOd4…)
- "AI has access to all information, can combine all knowledge into new insights. T…" (ytc_UgyTThOyF…)
- "The therapist tells them t9 look at funny videos 😮 WTF!!!?! You can’t erase fro…" (ytc_UgyQir8vD…)
- "the only thing AI art really has going for it is that it is free, basically cost…" (ytc_UgzyNQ8Pl…)
Comment
The lack of human rights creates dysfunctional societies, so giving rights is not intended only to benefit the individual but also to benefit the group. The lack of robot rights currently doesn't disturb society because people can claim property damage. But if robots become capable of waging war against humans (with or without any sense of self-defense, like pain and emotion), that would change the game. And if robots become superhuman, then this discussion is as useful as cats deciding whether humans should have rights - we would be the cats. There's one problem with this discussion: why does everyone assume that future robots would be friendly to one another, that they would become an organized collective with a single objective? Humans don't, so why should they? As free thinkers and free agents, they are likely to diverge in their (superhuman) ideals too.
youtube
AI Moral Status
2017-02-23T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
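A coded result like the one above can be sanity-checked against the codebook before it is stored. The following is a minimal sketch; the allowed value sets are inferred only from the codes visible on this page and are likely incomplete, and the function name `validate_coding` is hypothetical.

```python
# Hypothetical validator for one coded result. ALLOWED is an assumption:
# it lists only the dimension values that appear on this page.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"indifference", "outrage", "mixed", "approval", "fear"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding looks valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = coding.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim!r} value: {value!r}")
    return problems

# The coding shown in the table above passes the check.
example = {"responsibility": "none", "reasoning": "consequentialist",
           "policy": "regulate", "emotion": "indifference"}
print(validate_coding(example))  # → []
```

Running the validator over every parsed coding before display would catch malformed LLM output early.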
Raw LLM Response
```json
[
{"id":"ytc_UggD3ftR4rVvoHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgjHDsa6X9WSA3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgjHNgX2PLTXdXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugg_-pM4pNajdXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgiRwHOTbYP9qHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UghDAQ9vzeYcbngCoAEC","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UghI9gxfJBEJK3gCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgjmvxyKpyiwLHgCoAEC","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ughf3kv_0SxIWXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgjldUSsX4ZuyngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```