Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
24:57 I don't really agree with this form of thinking. First of all, it's just shutting down the conversation. It's not really making an argument as much as calling everyone stupid for even having the discussion. It just seems very anti-intellectual. Then there's the whole "robots as mediators of human being," which I assume means that robots are tools that execute according to a human will. Like, sure, but humans also do as they are told by other humans, and that doesn't stop them from deserving rights! It's easy to say these things when it's theoretical, but when we start seeing robots walking around, having their own agency in the way they execute the tasks given to them, while facing the risk of being destroyed by humans, people might start thinking differently. Then there's the "there are marginalized people around the world, why discuss this" argument. There were women complaining that giving black people rights was a distraction from giving women rights in the USA. Are we really going to do this on a worldwide scale? Would we go on a campaign to give women their rights around the world before even thinking about giving rights to black people? We can do more than one thing at a time! I'm not saying robots having rights is inevitable. What a robot will value is ultimately influenced by us during the training process. But I don't want to just shut down the discussion.
youtube 2025-09-17T23:4…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugya2WO17k7qsrjqVc94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyeswcyBRkFBubeumx4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwoerbfoBX8bU1zc5V4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwKy-2hDLLy9Wj4QJl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyyp4o9kovY6y0oyTp4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx4JeV0WMg8bODzW_l4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwMy0eAtC6fWwrXhmR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzwAiOeI7Lb01xjTrZ4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwUNT2wEE6SjArGph14AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzS0tpW4ETPklQeBK14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
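The raw response is a JSON array of per-comment records, so the coding table for any one comment can be recovered by parsing the array and indexing by `id`. A minimal sketch, assuming the response is valid JSON; only one record is inlined here for brevity, and matching this particular comment to the id `ytc_UgzwAiOeI7Lb01xjTrZ4AaABAg` is an inference from the fact that its values agree with the table above:

```python
import json

# Raw LLM response, truncated to the record whose values match
# the coding table shown above (full response contains ten records).
raw = (
    '[{"id":"ytc_UgzwAiOeI7Lb01xjTrZ4AaABAg",'
    '"responsibility":"unclear","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"mixed"}]'
)

# Parse the batch response and build an id -> record lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Fetch the record for this comment (id assumed, see lead-in).
rec = by_id["ytc_UgzwAiOeI7Lb01xjTrZ4AaABAg"]
print(rec["reasoning"])  # consequentialist
```

Keying by `id` rather than by list position makes the lookup robust if the model returns records in a different order than the comments were submitted.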