Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Dear Big Think and viewers,

My name is Mr W and I am a primary (elementary) teacher. The children at our school have recently spent time discussing, debating and writing about the rights and wrongs of A.I. having rights. We would like to add a thought from one of our students to the comments section. As such, we gratefully ask for your understanding and respect if you read their written piece. Also we (they) would like to say that they welcome feedback based on their opinions and ideas.

Yours thankfully, Mr W

Dear Big Think,

How could anybody possibly think that AI should have the same rights as humans? I strongly believe that AI should not have the same rights as humans because AI does not have consciousness. Therefore, AI cannot feel emotions and pain like we do and they have never experienced what humans have experienced. For example, would toasters who don't have feelings mind being insulted? And would toasters mind being dismantled if they had no fear of death?

And if AI gets rights, then they cannot follow the three laws of robotics because then AI can do whatever they want and they will not perform tasks for us anymore. Also, if AI gets the right to work, then AI will without a doubt, do a terrible job. For example, when high school students from Kealing high school got 1600 flavours of ice cream and asked AI to make a new flavour out of the 1600 flavours, AI came up with gross, disgusting and probably really unhealthy flavours like Strawberry cream disease and pumpkin trash break.

Finally, AI has never experienced what human rights are so won't AI get confused with what rights are?

Yours kindly, Robin Infinite
Source: youtube · AI Moral Status · 2022-12-08T06:3…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgxleKiIuvJR13Xun6d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzTzM-HbQVMVfhGltt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwkWhahdErdiYSp0lJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzH3YJBTR9d8tyYRHF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugw0_WwguSb2vNOIWlF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyflaKGSrnmLIvuM5Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyupU1PadO8RLXEVV14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugyyr_lBzRC6shajfr14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwvfbKoELHfjBMMgIB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugz04_Uv9jI1vSTJ0V14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"} ]