Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The idea that using ChatGPT to explain a complex concept means you “didn’t really learn it” doesn’t make sense to me. You said that reading something and finally understanding it is not the same as learning it—but if that’s true, then did I also “not learn” math when I studied from textbooks, watched online lectures, and asked questions in forums? I took Calculus II and III entirely online, and yes—AI helped me. Not by solving problems for me, but by explaining why certain troubleshooting steps were necessary when I was missing a piece of understanding. How is that any different from Googling an explanation, reading a textbook again, or asking a tutor? Are we saying people who use tutors “become less intelligent” because someone guided them through the thought process? No one would argue that.

The study in the video focuses on essays, but writing essays has never been a strong indicator of overall intelligence—especially for people who don’t care about essays in the first place. If someone already has no interest in writing, their performance in that task isn’t a fair measurement of their actual cognitive effort or ability. That variable alone changes the entire interpretation of the study.

I agree that some people may use AI to shortcut tasks that would normally require deeper thinking, under very specific circumstances. But that doesn’t mean AI inherently makes people less intelligent. It just means a small group of people might misuse it—just like some people misuse calculators, search engines, or even notes. Apps like Elevate, textbooks, tutors, videos, and now AI all exist to help us understand things more clearly. That’s not losing intelligence—that’s using tools to learn better.
youtube 2025-11-21T14:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugzo6aXa_FBDxXMTUrB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy6KoCW_wOO5lcRSVp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzEzz6i1SsFkpqgFHh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgypmvANZ8cRs3eijDp4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwKeUPU0_ug2-GVO5Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxXZDCq7tzUdYE1asN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwSB1hsLKLZ1Wlw-7B4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzpqNNZziwo_twWwgR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxcNZOVoCgEpT1XFHp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwVOjTJ0v8VTwmFa3R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
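A raw response like the one above can be turned into the per-comment coding table with a small amount of parsing. The following is a minimal sketch, assuming the response is a JSON array of objects with exactly the five fields shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the `find_code` helper name is hypothetical, and the shortened `raw` string below reuses one record from the response above for illustration.

```python
import json

# One record excerpted from the raw LLM response shown above.
raw = '''[
  {"id": "ytc_Ugy6KoCW_wOO5lcRSVp4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

# Every record is expected to carry these five keys.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def find_code(records, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    for rec in records:
        if rec.get("id") == comment_id:
            return rec
    return None


records = json.loads(raw)
# Light schema check before trusting the model output.
assert all(EXPECTED_KEYS <= set(rec) for rec in records)

code = find_code(records, "ytc_Ugy6KoCW_wOO5lcRSVp4AaABAg")
print(code["emotion"])  # -> indifference
```

Validating the keys before lookup matters here because LLM output is not guaranteed to be well-formed: a malformed array should fail loudly at the schema check rather than silently produce an incomplete coding row.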