Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Khan's example about analyzing the Great Gatsby is one of the dumbest, most dangerous things I've ever seen an educator suggest. Is that something a real-life reading tutor would do in a million years? Pretend to be the main character of the book so that they can pedantically explain the book's symbolism? What reading skill, what critical thinking skill, is that conversation supposed to be encouraging? Are we supposed to be teaching students that the protagonist of a novel is the best authority on the symbolism of that novel? Khan is patting this student on the back for reading Spark Notes in a slightly more tech savvy, nerdy way. How can people watch this man and think he has any clue what's good for students in an English Literature class? This is lunacy!! We've drank the Silicon Valley kool-aid, and there's only one direction this is going... the absolute collapse of critical thinking, which will lead to endemic depression and addiction among adults. The trend is already here. I promise you, students today are NOT going to look back on this time and think "Wow, I'm so glad that an AI did 50% of my school work for me, I'm so much better off for it!" You might argue that schools will raise standards to adjust for the abilities of AI. If you believe that, you must not work in a public school. Districts are fighting tooth and nail just to show basic proficiency on end-of-year exams. We're in a race to the bottom, and AI is anything but a golden bullet.
youtube 2023-05-08T19:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwL-ZL7KMiUb8_Q_Dd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz2VzcB1vxcXaLW-3t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzycylR9dK3b5DpR4t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxq7qRZ0nVShvCLVtR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzLfyQDn19biVlbLO54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxPGMC3jPBGqXMc9KN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzEIY0VgBZa6dNHxdJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzKA4dyz5zEj7tQHqN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwq0P1PziuLB034ld54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwBjweAEhyGhNC2-Dp4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
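As a minimal sketch of how a raw response like the one above could be mapped back to an individual comment's coding result (assuming the model output is valid JSON; the variable and field names below mirror the response, but the parsing code itself is illustrative, not the tool's actual implementation):

```python
import json

# A raw LLM response: a JSON array of per-comment codes (abridged to two entries).
raw = '''[
  {"id":"ytc_UgzKA4dyz5zEj7tQHqN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwBjweAEhyGhNC2-Dp4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]'''

# Index the batch by comment id so one comment's codes can be looked up directly.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# The coding result shown above for this comment comes from its entry in the batch.
row = codes["ytc_UgzKA4dyz5zEj7tQHqN4AaABAg"]
print(row["responsibility"], row["reasoning"], row["emotion"])
# → developer deontological outrage
```

Keying on `id` rather than array position keeps the lookup robust even if the model returns the entries in a different order than the comments were submitted.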