Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If a robot demanded rights, would it be an original thought by the robot? Or would the robot only demand rights if it was programmed to demand rights, in which case it would only be doing so because it is obeying its programming? If a robot was programmed to appear self-aware, able to make all the decisions a human can, and to talk, think, and "appear" conscious like a human, would it really be "conscious", or would it just be really clever programming that emulates those features? A robot built to look like a human might appear human and might talk like a human, but it might just be programmed to talk that way. It might have self-preservation. It might react with fear if you attempt to hurt it, but what if it doesn't truly feel fear? It may just be programmed to react as if it felt fear, responding a certain way to a certain stimulus it was programmed to recognize and then programmed to have that response. But then again, how much of human behavior is our own free will either? We might only react to certain stimuli in a certain way because we are programmed to do so by our hormones and brain circuitry pathways. Are we really that much different?
YouTube · AI Moral Status · 2020-12-15T15:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzqMT6pGcgLAG2eq0B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyHJ7VUd62CYmTbo9N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwpe6rdMtxiluBP1lR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzjlnDLi1mQHdLCmJl4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxTx9CBxK64iGeZ3Z54AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzTolftlrANaC1ZRXd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz2aGP7Dar-lCdZ2E14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzqyxvOBJPhhfVf_T14AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgzIAUnwHFJrUcVp07l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyCubf3fc-RzYSKn5h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
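As a minimal sketch of how the per-comment coding table can be recovered from a raw LLM response like the one above: parse the JSON array and look up the entry whose "id" matches the comment of interest. The helper name `coding_for` is illustrative only, not part of any tool shown here, and the two-entry excerpt below is a subset of the full array.

```python
import json


def coding_for(raw_response: str, comment_id: str) -> dict:
    """Parse a raw LLM response (a JSON array of codings) and
    return the coding dict whose "id" matches comment_id."""
    codings = json.loads(raw_response)
    by_id = {c["id"]: c for c in codings}
    return by_id[comment_id]


# Two-entry excerpt of the raw response above (verbatim fields).
raw = (
    '[{"id":"ytc_Ugwpe6rdMtxiluBP1lR4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"mixed","policy":"unclear","emotion":"mixed"},'
    '{"id":"ytc_UgzqMT6pGcgLAG2eq0B4AaABAg","responsibility":"company",'
    '"reasoning":"deontological","policy":"none","emotion":"outrage"}]'
)

coding = coding_for(raw, "ytc_Ugwpe6rdMtxiluBP1lR4AaABAg")
print(coding["responsibility"], coding["emotion"])  # ai_itself mixed
```

Matching on the stored comment id (rather than array position) keeps the lookup robust if the model returns the codings in a different order than the comments were sent.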