Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am getting my bachelor's in CS right now, but I've been working as a dev for 5 years now, and man, I'm so glad I learned to code before ChatGPT, because otherwise I would've turned out just like this guy. As for how people with a complete lack of knowledge of the most basic concepts are able to graduate: at least in my school, the teachers are struggling with how to structure the exams.

My school really prides itself on learning the concepts by applying them (for example, in order to teach the basics of programming languages, we had 2 courses in Java, with 2 project courses where we had to build a project as a group). The exams also followed that structure, meaning in order to prove that you could code, you just had to code. Enter ChatGPT. Some profs tried to forbid it (but that isn't 100% policeable); others didn't forbid it, but then compensated by giving the exam so many questions that you actually couldn't solve all of them without using ChatGPT.

This semester, we had an exam where you had to read a lot of code and make judgements about what it did. Using AI was allowed, except all the code was printed out, so we couldn't just copy-paste it into ChatGPT. Except I heard from a few of my peers that they just turned off their brain and typed everything into ChatGPT - and I mean, I can't blame them. I felt confident enough that I could solve it by myself because I'd been working with the language for years already, but someone who was only introduced to it this semester?

I believe that in the long term, we'll see a return to writing code on paper, because that's the only way to make sure no one can use AI as a crutch, and to test whether someone actually knows how to write code. And I'm not sure putting all the blame on the students is the right thing either. There is so much pressure to perform, while AI makes it so entirely effortless.
youtube AI Jobs 2026-01-26T23:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwpQxZjy3OPpW6dEc54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxcPCqhhlHAm_CV4Xd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxT75Ge6L-gel8cTWR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwIj40eHoYLW-gp4jd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx5NCGoCd8fsdhJ_594AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzWYZJi6fTlzGe1mwR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxjG2EK8oteDDq2Xvx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwj6H8YY3CgklNKuUp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzkfnAl-0RYDFFCa254AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwj29R9mkvu-EsZZdd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
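The raw response is a JSON array with one coding object per comment, each carrying the four dimensions shown in the table above. A minimal sketch of how such a payload could be parsed and indexed by comment id (the two entries below are copied from the response above; the helper name `index_codes` is illustrative, not part of the pipeline):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment coding objects.
raw = """[
  {"id":"ytc_Ugwj29R9mkvu-EsZZdd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx5NCGoCd8fsdhJ_594AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

def index_codes(raw_json: str) -> dict:
    """Parse the model output and key each coding object by its comment id."""
    return {entry["id"]: entry for entry in json.loads(raw_json)}

codes = index_codes(raw)
print(codes["ytc_Ugwj29R9mkvu-EsZZdd4AaABAg"]["reasoning"])  # deontological
```

Indexing by id makes it easy to join a single comment's coded dimensions back to its text, which is what the table above displays for one comment.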