Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is a strong parallel between humanity's history of denying rights and a theoretical future where AI becomes complex enough to simulate humans completely. One of the big differences is that robots don't have to feel pain if we don't program them to (we will; humans do weird things like that). If a robot can feel pain, one may argue it deserves rights to avoid that pain. But then the question is whether it is morally right to program pain into it in the first place. Wouldn't programming a being that may be sentient to feel pain be just as wrong as inflicting that pain on it?

Another question is what we define as pain. Yes, we can simply not program AI with the potential to feel pain, as the video suggests, but at what point do we define something as pain? Certain AI will have to be programmed with preferences. For instance, a budgeting AI will prefer to save as much money as possible. If we purposely prevent the AI from saving money, does that qualify as pain? If not, at what point does it? Does it start being pain only when we give it the ability to artificially showcase "pain" to humans? Or worse, if you consider AI to be sentient, do you define any and all unfulfilled preferences as pain, even if the AI doesn't have the capability to express it to humans? If so, where does it stop? Does that mean even the AI we have today constantly feels pain whenever it doesn't get what it wants?

Consciousness is a tricky issue. I have my own definition based on my religious beliefs, and I know the ideal future through the lens of what I believe, but it's an important issue to examine from every angle, since it may very well become the next gigantic philosophical and political debate in the world, 100x the size of the abortion debate.
Source: youtube · Video: AI Moral Status · Posted: 2020-06-03T15:1… · ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       contractualist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
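For readers scripting over these exports, here is a minimal Python sketch of checking a coded record against the value sets that actually appear in this batch. `ALLOWED` and `validate_coding` are illustrative names, and the project's full codebook may permit values beyond those observed in the raw response below:

    # Value sets observed in the raw response below; the project's full
    # codebook may define additional values.
    ALLOWED = {
        "responsibility": {"developer", "user", "none"},
        "reasoning": {"deontological", "consequentialist", "contractualist",
                      "virtue", "mixed", "unclear"},
        "policy": {"none", "liability", "regulate"},
        "emotion": {"indifference", "approval", "fear", "outrage"},
    }

    def validate_coding(record: dict) -> list[str]:
        """Return the dimensions whose value falls outside ALLOWED."""
        return [dim for dim, values in ALLOWED.items()
                if record.get(dim) not in values]

    # The coding shown in the table above passes:
    assert validate_coding({"responsibility": "developer",
                            "reasoning": "contractualist",
                            "policy": "regulate",
                            "emotion": "fear"}) == []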
Raw LLM Response
[ {"id":"ytc_UgxJm3E-y9TrZz9ZQyB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzIklbBYFK_wAcCJ3h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_Ugzl0909SCqhcqHvu3d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz0unC-Zl0FFY3fD-x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxwGdNyZq4xxa8UqVx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"}, {"id":"ytc_UgyysRPT99NXxnZX0Ql4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyylHmqBVEN4ysZzyF4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwXLI3Ga06JIPqCcbF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugz8zJOWozqD_HM4NiJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwSOH3Ousge2LRAiWJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"} ]