Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Great video! I just think theres 1 flaw with the idea that "predicting something humans created often requires being smarter then the humans who created it" (at 47:05) - its more apropriate to say 'more informed'- it likely lacks reasoning but have much more data including statistically wich leads to so often guessing right. Even our experts have only so much data on their own field much less on other subjects... A complete newbie could win against a champion at poker by knowing more (like knowing the opponents cards)... also knowledge is specific- a jaguar could very well predict what the next turn a fleeing human may make, not by being smarter then a human but having more experience(data) on chases. Its very easy to blur knowledge/information with inteligence but theyre deep down different things. We collectively relate both because IRL they often go hand in hand - and the very act of studying/learning helps buildup ones faculties, more varied neural pathways leading to better inteligence- but therye still separate things. The smartest person to ever be born in front of something they never saw would make stupid decisions, while someone really defective in their inteligence could have memorized everything about that subject and also fail... Llms lack self awareness, they cant guess right something complex in one moment in the very next fail to realize a very stupid mistake even with researchers drawing the explanation to then. Hence why a good neural algorithm may predict something very complex like weather patterns even if it cant really get aerodynamics or why it behaves as it does- massive data.
youtube · AI Moral Status · 2025-10-30T23:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugx1_ez-0vl8tEvhPGR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzM4bqngjE5_ib5sKJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxcTb6i8AUGg19T2n54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxPfOmj4m_Aube5q4J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxE54jNX8p3yYjG0W54AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugyi_QHZ-dhPQu0-UFB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyGk8_0HvVBwUZdVCJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyQLyHJl3d48kzDxI14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwe5HXo6jXaynqJ0ZF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxiMiO945P8eZMsdu14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
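The raw response is a JSON array with one record per comment id and four coded dimensions. A minimal sketch of how such a response could be parsed and validated in Python; the allowed value sets below are inferred from this sample only (the project's actual codebook may define more categories), and `parse_codes` is a hypothetical helper, not part of the tool shown here:

```python
import json

# Allowed values per dimension, inferred from this sample only --
# the project's codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "approval", "fear", "outrage"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting bad values."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# The second record above, which matches the Coding Result table for this comment.
raw = ('[{"id":"ytc_UgzM4bqngjE5_ib5sKJ4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"unclear","emotion":"approval"}]')
codes = parse_codes(raw)
print(codes["ytc_UgzM4bqngjE5_ib5sKJ4AaABAg"]["reasoning"])  # mixed
```

Validating against a fixed value set at parse time catches the common failure mode where the model invents an off-schema label, so a bad response fails loudly instead of silently entering the coded data.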