Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've noticed a decline in my students' ability to think and answer in depth. My grade 11 DP Design Technology class was recently asked to defend their finished plans for upcoming research to another teacher posing as the Client who commissioned the design of the product. (Why they had spent so much time doing field observations, why he should continue to fund their research, why they needed more and different research activities, etc.) Most of the answers were full of buzzwords and sounded good, but contained mostly fluff and no actual reasoning, defence, or evidence. Very low-level generative-AI answers. But they weren't told about the activity beforehand and did not have their computers, so the answers were purely their own.

We got answers similar to "We discovered pain points" (with no clarification of what they were or why they were important), "Gave us an idea of the needs", "We could see how they act", etc. One student almost got there, referring to specific discoveries that supported the project, but was unable to explain how those findings called for further research. Basically, they really struggled to move beyond the textbook generalization of "we research because", dressed up in flowery AI buzzwords, and apply it directly to the project and the data they had already collected.

This teachable moment completely derailed my planned lesson. They really struggled to see why their answers wouldn't earn many points on an exam or wouldn't inspire confidence and continued funding from a client. And this is after we have taught depth and using supporting evidence in answers. But on the spot, their go-to was the empty answers they are used to getting from generative AI.
Source: YouTube · Viral AI Reaction · 2025-11-11T02:4… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxQPiOFkWbhAt8VC-h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxaOZOmo_OnM67eIwR4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "unclear"},
  {"id": "ytc_Ugyf5Um60r8VKvvTzl94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy-29Nh5BKhmWFthO14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwfTpCAoNMObgg79QN4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyBRdewOo44XOGXBf54AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyXTQJD4Drx-jRFc2x4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwMc7yxg7tOnJlrRIJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxEvUNAObcA9zmhmPR4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzt4iZppY6To0mUBXR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
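A raw response like the one above can be checked and summarized with a few lines of Python. This is a minimal sketch, not the tool's actual validation code: the required record keys are inferred from the JSON shown here, and `tally` is a hypothetical helper name.

```python
import json
from collections import Counter

# Keys every coded record is assumed to carry (inferred from the
# raw response above, not from the tool's actual schema).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def tally(raw: str, dimension: str) -> Counter:
    """Parse a raw LLM response (JSON array) and count the values
    coded for one dimension, failing loudly on malformed records."""
    records = json.loads(raw)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} is missing {missing}")
    return Counter(rec[dimension] for rec in records)

# First three records from the response above, for illustration.
sample = json.dumps([
    {"id": "ytc_UgxQPiOFkWbhAt8VC-h4AaABAg", "responsibility": "none",
     "reasoning": "unclear", "policy": "none", "emotion": "approval"},
    {"id": "ytc_UgxaOZOmo_OnM67eIwR4AaABAg", "responsibility": "unclear",
     "reasoning": "unclear", "policy": "none", "emotion": "unclear"},
    {"id": "ytc_Ugyf5Um60r8VKvvTzl94AaABAg", "responsibility": "none",
     "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
])

print(tally(sample, "emotion"))  # each emotion appears once in this sample
```

Tallying per dimension like this makes it easy to spot batches dominated by "unclear" codes, which may signal a prompt or parsing problem rather than genuinely ambiguous comments.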