Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I disagree with your variant of the gym analogy. You say that using the AI is not like using a robot to lift the weight for you but rather like using steroids. My issue with that framing is that using steroids to get faster progress still requires effort, and you as the person can still lift the weights, even without the steroids. I think that the robot lifting the weight for you is the better metaphor, and I think the 'correct' use of AI would be like using the weight-lifting robot as a spotter, letting you push yourself (or your ideas) further than you could alone. Controlling a robot requires its own host of skills and mindset, just like using AI. When calculators were introduced, we traded the ability to do math for the ability to use a calculator, which was generally beneficial. Now that AI can approximate thinking and solving problems, we are trading away the ability to think to gain easy performance. I think this is generally a bad trade, and making a different kind of trade requires self-control and discipline, which people are famously good at. Since you pointed out that Gehl has an anti-AI bias (which I appreciated), I would have also liked for you to point out that the Alpha school costs a minimum of $40,000 a year to enroll in, and thus its students are from wealthy families who are invested in their education. These students would likely perform above average regardless of the environment they were placed in. Anyway, I really enjoyed the video :P
youtube 2025-07-11T21:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugzk8idTi9ifIIsNp8J4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgxI7tOYuAzaZEPlggF4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgzbSSOGMTiiAOEUo5J4AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgyZX6qdrC0gGHUXeSB4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgzmesOPB2r3pal_0814AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzNohOfDzZ9dDZ4p014AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgyaoUkl4KDBILl33dR4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgwJi8rclHvdNMwx35p4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "approval"},
  {"id": "ytc_Ugx5PKfFbJgHFmHjZ_V4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_Ugx3TzTnMSIfr5kbl214AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "regulate",      "emotion": "resignation"}
]
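The batched response above codes every comment in one JSON array, so looking up the coded dimensions for a single comment (as the table does) means matching on the `id` field. A minimal sketch of that lookup, assuming the raw output is valid JSON; the function name `code_for_comment` is hypothetical, and the string below is truncated to two records for brevity:

```python
import json

# Two records from the raw model output shown above.
raw_response = '''
[ {"id":"ytc_Ugzk8idTi9ifIIsNp8J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxI7tOYuAzaZEPlggF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"} ]
'''

# The four coding dimensions shown in the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for_comment(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or raise KeyError."""
    records = json.loads(raw)
    for record in records:
        if record.get("id") == comment_id:
            # Keep only the known dimensions, dropping the id itself.
            return {dim: record[dim] for dim in DIMENSIONS}
    raise KeyError(comment_id)

print(code_for_comment(raw_response, "ytc_UgxI7tOYuAzaZEPlggF4AaABAg"))
# → {'responsibility': 'none', 'reasoning': 'mixed', 'policy': 'none', 'emotion': 'indifference'}
```

In practice a raw LLM response may not be valid JSON, so a real pipeline would wrap `json.loads` in error handling and validate each record's fields before trusting the codes.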