Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Extremely disturbing that when asked about what he wants for his kids he says '[that they] will never be smarter than AI' - 45:52. After the Industrial Revolution took away a large portion of humans needing to use their bodies for work (a good thing of course), as Sam said himself in his piece The Merge, humans' 'self-worth is so based on our intelligence'. To want your son to grow up in a world without this source of self-worth is terrifying. Sam also says 'it is a failure of human imagination and human arrogance to assume that we will never build things smarter than ourselves'. But what of Sam and other AI accelerationists' failures of imagination and arrogance in believing that people should be grateful as the AI they made decimates their local economies and communities? This is in addition to the arrogance in intentionally designing super-intelligence that purposefully negates so much of what makes us human: our unique intelligence, which, as Sam says, gives us our self-worth. Arrogance in the extreme.
youtube 2025-08-01T09:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          unclear
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxmvKxIBiv3l5KARsJ4AaABAg", "responsibility": "unclear",     "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgzTeyyok9c9hhA5VzR4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",  "emotion": "mixed"},
  {"id": "ytc_UgyXczWVzVDYFg134Rt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugzsh9BC7jiAo32sAn94AaABAg", "responsibility": "unclear",     "reasoning": "mixed",            "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_UgyAKUq6PLrHCsGd52B4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_UgwS7gGAr-EnQWAxeZN4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugx5UBnZRHiO2ALPfRl4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_UgyDjXm1ayue0pFAmQ14AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzTAW43RVop5KAks_B4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugwu1p81KpwoaOlRuTJ4AaABAg", "responsibility": "unclear",     "reasoning": "mixed",            "policy": "regulate",  "emotion": "indifference"}
]
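A raw batch response like the one above can be parsed and indexed by comment id so the coding for any single comment can be looked up directly. The sketch below is a minimal, hypothetical helper (the function name `parse_codings` and the validation step are assumptions, not part of the tool); only the field names and ids come from the response shown above.

```python
import json

# Two records excerpted verbatim from the raw response above.
raw = '''[
  {"id": "ytc_Ugx5UBnZRHiO2ALPfRl4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxmvKxIBiv3l5KARsJ4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

# The four coding dimensions visible in the response, plus the comment id.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a batch coding response and index the records by comment id."""
    records = json.loads(text)
    by_id = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
        by_id[rec["id"]] = rec
    return by_id

codings = parse_codings(raw)
print(codings["ytc_Ugx5UBnZRHiO2ALPfRl4AaABAg"]["emotion"])  # outrage
```

Raising on missing keys makes malformed model output fail loudly at parse time rather than surfacing later as an "unclear" coding in the dashboard.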