Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Does conscious AI deserve rights - yes - will we give them to it - maybe - why would we give them - to make us feel better. I miss Big Think asking big questions...

For me the more interesting question to address would be what kind of rights and under what circumstances would we be willing to give to the AI. This is surprisingly connected to the question of animals and plants and their rights - we do need to eat and if all carnivores would stop eating meat we would all get into a big mess pretty soon and same goes with one species procreating at an unsustainable pace since it is somewhat interconnected. The topics were touched on a bit by the speakers but I would like that to be expanded further.

Currently we are trying to give "basic human rights" to all humans while most humans don't seem to tick the responsibility marks in the slightest and some rights to animals but none to plants so it'd be interesting to hear how the line should be drawn in regard to AI since currently other lines are drawn arbitrarily like "all humans" or "all animals (that we don't need to test on or are not in any other way needed like for zoos)". This affects the AI in regard to potential procreation speed (same as too many humans or deer due to natural imbalance - lack of predators/self control/rules) and what the restricting rules for loosing the rights should be.

The next question that arises from such questions immediately becomes if we evolve AI but are limiting the genetic testing on animals due to "if they develop consciousness that becomes torture as we study them" instead of just stating the obvious "we don't want competition" how the evolution of AI fits into such narrative and if we create AI that is conscious is it our "moral obligation" to do the stupid thing and destroy any AI that achieves consciousness and should we give conscious creatures in general (including AI) some basic rights of evolving even before we reach such evolution stage in AI?
youtube AI Moral Status 2020-08-17T11:4… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       contractualist
Policy          unclear
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugw_TfPRrOOOhJdA3xh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgznS0BoSSoOvI2YrRZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwAN9YppoZmQ26k_AF4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzDstFwtt3R46fASG54AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyMBoyOuZxuQJeHPyB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxL67iRnZmsn98cJg14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx7q_oEmI0YQ0VA2gh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwvrPQUl49mGlkzNBJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyZ1Q458X4GBQOHQYN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwcHhN1oBEmIXh4vz94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
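A raw response like the one above can be checked against the coded result with a short script. This is a minimal sketch, not part of the coding pipeline: it assumes the response is a JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys, and the allowed label sets are inferred only from the values visible in this record, not from a definitive schema.

```python
import json

# One entry from the raw LLM response above (the comment shown in this record).
raw = (
    '[{"id":"ytc_UgwAN9YppoZmQ26k_AF4AaABAg","responsibility":"none",'
    '"reasoning":"contractualist","policy":"unclear","emotion":"approval"}]'
)

# Label sets inferred from the values seen in this record (an assumption,
# not the tool's authoritative schema).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "user"},
    "reasoning": {"unclear", "deontological", "contractualist", "virtue",
                  "consequentialist"},
    "policy": {"unclear", "none", "liability"},
    "emotion": {"mixed", "indifference", "approval", "outrage", "fear",
                "resignation"},
}

# Parse the array and index records by comment id for lookup.
records = json.loads(raw)
by_id = {rec["id"]: rec for rec in records}

# Validate that every coded dimension carries a known label.
for rec in records:
    for dim, allowed in ALLOWED.items():
        if rec[dim] not in allowed:
            raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")

# The entry for this comment matches the Coding Result table above.
rec = by_id["ytc_UgwAN9YppoZmQ26k_AF4AaABAg"]
print(rec["reasoning"], rec["emotion"])  # → contractualist approval
```

Indexing by `id` makes it easy to cross-check any single comment's row in the results table against the batch response it came from.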