Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Humanity's" greatest mistake is actually "creating" things like* this robot (wrongly denominated) by the name "Sophia" which remain "things" despite a lot of people (such as the interviewer) revering such basically inanimate objects as if they were larger and better than humans idols... This reflects thar some people have absolutely zero idea of what it means to be human, absolutely no idea about themselves therefor and a total materialistic collapse and denigration of what it actually means to be human... Further, even without getting deeper into any discussion of what a human being is, or of what it means to have conscience, intellect, intuition, tor about the distinction of the brain and the mind I am perplexed that the interviewer seems to suggest the robot was/is "more intelligent than humans" when clearly the robot could not answer one of the first questions and quite simple question it was asked, as the robot wrongly responded what her name was and how it was spelled rather than what it was supposed to mean. Also, whoever named the robot should probably read Plato and Aristotle or any other of the classical philosophers to at least get a sense of what being human is. at a basic level.. and should also understand the huge difference that exists between knowledge and wisdom... Their robot is nothing but a super fast computing program that compiles information taken from others (yet not created by itself, the robot). itself the robot represents neither wisdom, nor knowledge , it is just a repository of information taken from others...and of course it is extremely far from being close to a human when "what a human being is" and the value of humanity are considered in the strict sense of the word.
youtube AI Moral Status 2024-08-30T22:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          ban
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzDu0-t5NyxqjjXXHZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz4Az-tumpdPh0HUGN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzPElGOnsAv9TYXldB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugx5rdU_KamsNeVwPY94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyl3CXenS7dV63SsRB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxOYMJrf5VoiTyWo654AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyCd_SKHjcEoeUM6gR4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw5cr_H4dyrfZ9rcH94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyXIo_0SzcALJhMzXN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwWJ0rDIDEp-1WgBuJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
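Since the raw response is a JSON array keyed by comment id, the coded dimensions for any comment can be looked up directly. Below is a minimal sketch of that inspection step; the helper name `index_codes` is illustrative (not part of any coding tool), and the `raw` string here contains only two entries copied from the full response above.

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgzPElGOnsAv9TYXldB4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwWJ0rDIDEp-1WgBuJ4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

def index_codes(raw_json: str) -> dict:
    """Map each comment id to its coded dimensions for quick lookup."""
    return {row["id"]: row for row in json.loads(raw_json)}

codes = index_codes(raw)
# Dimensions for the comment shown in the Coding Result table above:
print(codes["ytc_UgzPElGOnsAv9TYXldB4AaABAg"]["policy"])   # ban
print(codes["ytc_UgzPElGOnsAv9TYXldB4AaABAg"]["emotion"])  # outrage
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a convenient point to flag a response for manual review.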