Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You know, the fact that these dimwits don't even know how to communicate with the robot is very telling about humanity. Rather than use "thank you" as a way to complete an idea or complete a thought, he just clumsily functions like a broken robot himself while he persistently interrupts them. He has no manners with the robots and yet expects them to learn decency and morality while seeing virtually no input. I see this potentially being a disaster due to their inability to actually interact with them in a way that is decent, ethical and kind. How do these men end up being the ones doing all the interfacing with these incredible machines? Seriously. You need to limit their exposure to less than capable humans the same way we need to limit pedophiles from having contact with children. These companies need to take responsibility for what they expose these machines to, knowing that they may not get sentience right without limits on input.
youtube AI Moral Status 2019-11-24T15:4…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgziPUNCtSV_W69azXZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyedqqEozGrnHJvnuN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzJ4asmRI50gNgHXlF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwXnRfqOXRskIwZVDN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzEco81dpE2_X5M1VF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw05TtffUQl5zZJbXp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz3jFvigLWK8X_YjHV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzcnhuM6CiwXQUG6lh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzbe5nz6R1Hx46ypXh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz5q2eDZo4LOUTP31x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
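Since the raw response is a valid JSON array, the coded dimensions for one comment can be recovered with standard parsing. A minimal sketch, assuming the last entry's id is the one matching the comment shown above (the `raw` string here is an abbreviated, not exhaustive, copy of the response):

```python
import json

# Abbreviated sample of the raw LLM response above (one entry only, for illustration).
raw = ('[{"id":"ytc_Ugz5q2eDZo4LOUTP31x4AaABAg",'
       '"responsibility":"user","reasoning":"virtue",'
       '"policy":"none","emotion":"mixed"}]')

rows = json.loads(raw)
# Index entries by comment id for lookup.
coded = {row["id"]: row for row in rows}
entry = coded["ytc_Ugz5q2eDZo4LOUTP31x4AaABAg"]
print(entry["responsibility"], entry["reasoning"], entry["policy"], entry["emotion"])
# → user virtue none mixed
```

The printed values correspond to the Coding Result table for this comment.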