Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Being able to come out with answers based on millions of earlier answers to come out with an, "acceptable reply", is not sentience. This guy does not understand the term, "sentience", whatsoever. He also seems to not comprehend how ai actually works. It simply takes the most accepted answers, averages them out, and spits it out. It is still based on humans contributing to what it says. That is NOT a sentient being. It is an algorithm. Nothing more. Nothing less. And, the idea of, "garbage-in, garbage-out", still applies. Your ai cannot have empathy. It can mirror the empathy of those who contribute to its, "knowledge base", but it cannot be empathetic, or sympathetic, either. This guy is not very smart.
youtube AI Moral Status 2022-07-10T17:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugw_nCo8jZzfDV690FB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxst6-z5-cYtH7yJNR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyLHgvZfSOpgP9gBux4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwVbW49L1122zDdY3d4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz25MilnQxSqhNcPkZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
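A minimal sketch of how a raw batch response like the one above could be parsed back into per-comment codes. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the JSON itself; the dictionary-by-id lookup is an illustrative choice, not necessarily the pipeline's actual implementation.

```python
import json

# Raw LLM response: a JSON array with one coded object per comment id
# (abbreviated to two entries from the response above).
raw = """[
  {"id": "ytc_Ugxst6-z5-cYtH7yJNR4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwVbW49L1122zDdY3d4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"}
]"""

# Index the coded rows by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# The "Coding Result" table above corresponds to this comment id.
result = codes["ytc_Ugxst6-z5-cYtH7yJNR4AaABAg"]
print(result["responsibility"], result["emotion"])  # developer outrage
```

Indexing by `id` also makes it easy to detect comments the model skipped: any comment id missing from `codes` went uncoded in that batch.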