Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a student in higher education, I understandably cannot stand on a high horse: I’ll admit I use AI to workshop ideas for topics I’m unfamiliar with. However, I know that generative AI programs are only as good as their training data, and they don’t have access to academic journals, only the abstracts. This causes multiple problems; two examples:

1. AI will spit out bottom-of-the-barrel information because that is what it can access, including misinformation created by reporters about a given finding.

2. AI will take the journal article’s abstract and extrapolate from it. I had an incident where the AI defined a concept, but on further reading of the article it cited, the AI was using the very definition the article was rejecting.

The problem with AI is that it writes in an incredibly convincing manner, so students simply take its word. The deeper question, however, is what the purpose of higher education is for students at an individual level. Do they want to learn, or are they just there to get a degree? And why have degrees become an integral part of career success even when most graduates won’t “use their degree”? There thus needs to be a disentanglement between academics and those who feel obligated to be there by social expectations. But yes, academics will cheat in other ways, like p-hacking. Both issues stem from the system people are subjected to.

Also, looking at the comments, many have mentioned teachers who most likely uploaded a student’s work to AI to make grading easier. This is horrifying, as it perpetuates slop training slop. Even more so, it is a blatant privacy violation of the student. Instructors should not, and cannot, upload student work to a private AI company without consent.
youtube · 2025-08-01T11:2… · ♥ 3
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgxWfFPGEX_QCxNWYqx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwQqV82kBfUscBv6rF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy7oF4hGjIXotqtuN54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxlgfjuOu5ojKoUhul4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxyTzcyHn8C7I4SCsV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwLQ4tMi2pcQyn7Ar14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgwoL8DdY5BiDbEfsaR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugxtag96Oq02FZUFTrx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzkhP0bdNHQijyU-p14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugyig6B7o2PpoSra4Jp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]