Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
40:00 An example of this sucralose problem is actually a lot of body horror media. Things like SOMA, where the WAU's goal is to keep humanity alive. Unfortunately, we're pretty incapable of expressing what humanity IS, and as such the WAU ends up doing some genuinely horrific things. Along the same vein are the general principles (and problems) of Isaac Asimov's rules for robotics. "Do not harm humans." Okay, well, define harm. ANY type of harm? Then the robot can't perform surgery, or CPR, or stitches, or even clean a wound. Also, if it decides to prevent harm from coming to humans, what if it comes to the conclusion that humans are the greatest threat to humans, and thus kills all humans to stop humans hurting humans? Is this logical? Yes, in some ways. This is the sucralose problem. Our idea of "help us" is vague, shifting, and personal. "Help me" is give me Bezos' money. "Help Bezos" is give him my money. How does it help both of us? It's a mess, and I do appreciate this guy and his understanding of the philosophy and his appreciation for pedants (hi, yes, I'm the one who immediately flinched before you told the philosophers and pedants to calm down), because in some ways pedantry is how you NEED to look at robotics. "There is no rule a dog can't play basketball" is EXACTLY the logic that robots could use to kill us. We can't harm humans, but what if we harmed all the water on the planet? It didn't harm US. You HAVE to be a pedant when discussing robotics in an AGI context.
Source: youtube, "AI Moral Status", 2025-10-31T11:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
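
Each comment is coded along four closed dimensions. As a minimal sketch, the scheme can be written as a small validation schema in Python; note that the label sets below are inferred only from the values visible in the raw response on this page, so the actual code book may define additional categories.

```python
from dataclasses import dataclass

# Label sets inferred from values visible on this page; the real
# code book may include additional categories (an assumption).
RESPONSIBILITY = {"unclear", "ai_itself", "company"}
REASONING = {"mixed", "consequentialist", "virtue"}
POLICY = {"unclear", "ban", "regulate", "liability"}
EMOTION = {"mixed", "fear", "outrage", "indifference"}

@dataclass
class CodedComment:
    """One coded comment: an ID plus one label per dimension."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject any label outside the known vocabulary for its dimension.
        for name, value, allowed in [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"{self.id}: unexpected {name} label {value!r}")
```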
Raw LLM Response
[ {"id":"ytc_Ugxh-xFMDzRtijV_RWN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyD5wue98PbE5w6vQh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugwwr9Hz-Xm0ju_Z87J4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugy_RYe1DkyFLHZpett4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyZGfs_ksrhC3MEb9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwPd8HSWtVjiRbdrup4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugx0Ea5lnMKyM_Ze4BV4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugwq9ZICKsOqZQpO8oR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"indifference"}, {"id":"ytc_UgyqkWMwN_-RNd09MSl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwK4s6z8uBakQbWtSh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"} ]