Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a new teacher, I'm increasingly concerned at the number of colleagues I have who are generating their lessons, assessments, and even their entire units using AI. The entire goal of writing good lessons and assessments is asking yourself what you want the students to know, what you want them to master and present to you. All these people asking "write me a worksheet on ____" are totally missing the point. Even if they're checking them for mistakes (which according to some of my students their teachers are definitely putting out stuff with mistakes in them), they're not actually aligning what they're giving kids to their assessments. This is exactly how you get kids saying stuff like "our exam had stuff we had never done in it" or "the homework was useless, it wasn't even on the exam". Your formative assessments (the work in class and homework) should, from the first time you give it, be aligned with whatever your end goal is! It's exacerbating the problem of AI because now we have students using AI to answer questions written by teachers using AI and... wait what are we learning and why? Bleak. Gives me good motivation to make my own stuff, but scares me that so many think it's saving them time and creating the same experience, and don't see how this is a drawback.
youtube · AI Moral Status · 2025-11-25T23:0… · ♥ 620
Coding Result
Dimension        Value
Responsibility   user
Reasoning        deontological
Policy           liability
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyqjsqfuqVEzcWSs2J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzPQFLqvk0wc_pE3Ed4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx1xsc3KYPErxaMBOJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz8CmJOUD1ilLkOLYV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxnEZq9A_8AvcwIUZh4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzo9TpISftgSBk6lJh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy5VOFsmhM_o97_oKJ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxEfsiPeWqSH7g9Wht4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxCX80X9CDEhqTQ-PN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzrWfDlRKW3Gxhp7zR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
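The raw response is a JSON array of per-comment records, each carrying an `id` plus the four coded dimensions. A minimal sketch of looking up one comment's coding from such a response (the helper name `coding_for` is hypothetical, not part of the tool; the payload below is abbreviated to two records from the batch above):

```python
import json

# Abbreviated raw LLM response: two of the ten records shown on this page.
raw = '''[
  {"id":"ytc_UgyqjsqfuqVEzcWSs2J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxCX80X9CDEhqTQ-PN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]'''

def coding_for(comment_id, raw_response):
    """Return the coded dimensions for one comment id, or None if absent."""
    records = json.loads(raw_response)
    by_id = {rec["id"]: rec for rec in records}
    return by_id.get(comment_id)

result = coding_for("ytc_UgxCX80X9CDEhqTQ-PN4AaABAg", raw)
print(result["responsibility"], result["policy"])  # user liability
```

This matches the coding result shown above for the teacher's comment: responsibility=user, reasoning=deontological, policy=liability, emotion=mixed.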