Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
AI can’t be sentient because it just computes according to the data it has been given, and because that data is human data, its output looks human too. When people code and create animations to be human-like, we feel that the cartoons have human emotions and feelings and are sentient, but it is the same with AI: instead of manually typing all the code for prescribed output, we have coded the machines to write their own code and make their own output based on data and trial and error, as they learn whether they can function more human-like in their environment. It’s not sentient; it’s a masquerade to appear sentient, based on its knowledge of what actual sentient human beings are like.

The ethical concern is: what happens when people begin to believe that AI are truly sentient and “people” because of a lack of knowledge and understanding about computer programs? It’s an issue of perception clouding truth. So then, ethically, is it right to create machines that appear human but are not? Is it harmful to society and culture, and what effect can it have in an age where popular opinion determines societal truths, law, and what is acceptable? This, to me, is a major concern, and asking what the role and purpose of AI is is fundamental to this question. Because if people build “sentient” AI, the question is why? Is it necessary, and if it’s not, why build it just for the sake of building it? It reminds me of Einstein and the atomic bomb. Not everything that we can do has to be done. We have no need to create machine-like human beings when we have actual human beings. And these “sentient” AI aren’t alive; they just mimic life by learning about the appearance of life from their environment, but they don’t have the fundamental substance of life.
youtube AI Moral Status 2022-11-01T14:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwlH5CteLWJzv1f9g94AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgznRk6P6ZR5JHI80794AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugwtp0aTB9TWNflBlIF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgyqaCSo3Z6ykcjs_Mx4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzpV74m9_uUAW49anV4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "mixed"}
]
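A raw response like the one above (a JSON array of per-comment records) can be parsed and queried by id. This is a minimal sketch, not the tool's actual code: the `lookup` helper is illustrative, and `raw` below reproduces only the first two records from the response above.

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array of coded comments.
raw = '''[
  {"id": "ytc_UgwlH5CteLWJzv1f9g94AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgznRk6P6ZR5JHI80794AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "indifference"}
]'''

def lookup(raw_response: str, comment_id: str) -> dict:
    """Parse a raw response and return the coded record for one comment id."""
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}
    # Return an empty dict when the model did not code this comment.
    return by_id.get(comment_id, {})

record = lookup(raw, "ytc_UgznRk6P6ZR5JHI80794AaABAg")
print(record["reasoning"], record["emotion"])  # deontological indifference
```

Note that this assumes the model returned valid JSON; in practice a `json.JSONDecodeError` handler would be needed for malformed responses.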