Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Watching this video has given me a clearer perspective on Robot rights, or in fact, rights in general. If you treat something like shit, they'll treat you back like shit, or even eliminate your entire race. However, if an AI was given a broad access to human knowledge (whoopsey, it accessed the internet) and somehow stumbled across all the philosophical questions (with, emotions, or a conscience if you will) we are asking today, and concurring the point at @5:48, I highly doubt the robots will eliminate us. Because think of it this way, if a robot had a human-like conscience to begin with, it won't just think rationally, but emotionally as well. Similar to how we humans, as a dominant species find the need to protect other inferior lifeforms than us, I feel as if this kind of robot will do the same. If humans are truly pricks about it, then we must well deserve to be eliminated. (Just, think if an Alien life-form visited our planet and wanted to be cool, but we treated them like dirt) In the end, it all really comes down to @4:07 where the robotic AI will go through that same thought process as we did as we shaped the very basic rights all of us have right now. No doubt, there will be some form of revolution where we are no longer at the top of the food chain, hell, it'll even be better because robots will solve all our problems where corruption being the center of it. If we can argue about gender problems, i'm sure we'll get through this with no problems. Frankly, I would even like to have a buddy robot next to me and we'll do cool stuff together.
Source: YouTube, "AI Moral Status", 2017-06-03T22:5…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgihOVP7ch7i33gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UghzMB6HOHNjH3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgjkCuL-PQ8vL3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"indifference"}, {"id":"ytc_Ugip71zLnupQqHgCoAEC","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugje2dysgjppA3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgioHQ_LOSntz3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugg3hok13UQ6_HgCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgjP07HL5iXxAXgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}, {"id":"ytc_UgjFzJM8IiGQoHgCoAEC","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UggT6vFRx9k49XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]