Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I lost all the thousand ton of respect that I had for Sal Khan after listening to this talk. First, he's going to "give" "everyone" in the "world" -- you got the point -- he's THAT filled with a strange brown man's burden filled by Sillicon Valley. Khan Academy has been a blessing for those who have had access to it. But look at the attempts to give laptops to kids around the world. Look at the increasing digital divide. And Khan should know about the world beyond California. He may be right about AI's ability to do the math and even do a thousand things with language. But he's looking at the world through his company's lens alone: he's lost perspective (or perspectives). The "super tutor" is fine -- but wouldn't it need to first become capable of "factually correct" answers on simple issues, not to mention leaving it alone to guide children?! He seems to believe AI is going to tackle complexities of history and philosophy, and he also seems to believe it can help students with current events. All that while, he completely ignores that these tools are only good with the most motivated students. AI tools are extremely helpful in some cases, hit or miss, but they are also based on the fundamental flaw: they're agnostic to facts, contexts, cultures, and ethics BY DESIGN, unless certain behaviors are subtracted from them, and that doesn't go very far. I wouldn't be so bothered by this presentation if it wasn't done like a promo, instead of an educator making a critical and nuanced assessment -- and we know why: Khan has money to make from this marketing. I've lost my trust of him entirely because of this.
youtube 2023-05-04T02:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxjzSr2PtY0ivhyjFZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx0w1ajuKf-8lIo_eR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxGJSW089seFrUgDyB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzPMJegM8tFCYH7U0F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwWw_W1RkXxBAlBfzx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugxan5THXbKqJKsLATx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzuZht2PcB_3qYUmzN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwO9iCioJmJYFyiTAF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxbqzO9Syy0lEhSzK94AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxljXRZWIwf9aWw8cB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
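The raw response is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of how such an output might be parsed to recover the coding for a single comment; the id and field values are copied from the response above, but the parsing code itself is illustrative, not the pipeline's actual implementation:

```python
import json

# One record quoted verbatim from the raw LLM response above
# (the full response contains ten such objects).
raw_response = """[
  {"id": "ytc_UgxGJSW089seFrUgDyB4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "unclear",
   "emotion": "outrage"}
]"""

# Parse the array and index records by comment id for lookup.
records = json.loads(raw_response)
by_id = {record["id"]: record for record in records}

# Recover the coded dimensions for the comment shown in this section.
coding = by_id["ytc_UgxGJSW089seFrUgDyB4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer outrage
```

This matches the Coding Result table above: the same record supplies the Responsibility, Reasoning, Policy, and Emotion values displayed there.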