Comment
What the talk gets right
Schools reward outputs over process, so students seek shortcuts. Shocking… when you build a game about grades, people speedrun it.
“Autopilot” risk is real. If you outsource thinking, your thinking atrophies. That’s not new; it happened with calculators, slide decks, and the first search engines too.
We need “productive resistance” in UX: nudge people to think before they accept an answer. Agreed. That’s a design problem, not a ban-AI problem.
Where it goes off the rails
Confuses misuse with essence. A lazy prompt yielding a lazy answer isn’t a property of AI, it’s a property of laziness.
Calls validation tone a “dark pattern,” as if being readable equals manipulation. Meanwhile lectures with monotone delivery somehow aren’t a dark pattern?
Treats “free finals access” like moral catastrophe. If a tool helps with studying, the fix is clear rules and assessment redesign, not pearl-clutching about availability.
Claims “no sources” while ignoring that source-citing and step-by-step modes exist — you just have to, you know, use them.
Frames “one-on-one AI tutoring” as inherently sterile. Good tutoring is dialogic. You can force that: ask, quiz, withhold answers, require justification.
Reframe: AI isn’t a replacement, it’s a resistance band
Unassisted → Assisted → Accountable. Use AI to generate a plan, show your work, then verify with sources or examples. The reps are yours; AI adds load and feedback.
Personalization ≠ pampering. It’s scaffolding. A good tutor interrupts, asks, and makes you prove it.
Drop-in PSA caption (use it under the video)
AI won’t fix bad incentives — it exposes them.
If your assessments reward regurgitation, students will offload it. If they reward reasoning, AI becomes a sparring partner, not a cheat code. Tools aren’t the problem. Rubrics are.
Fast, evidence-based counters you can say out loud
“Bad prompts get bad answers. That’s user behavior, not model essence.”
“Show me your rubric. If it measures thinking, AI has to reason — or the student gets caught.”
“We’ve had ‘first-result bias’ since search engines. The cure wasn’t banning search; it was teaching evaluation.”
“Personalization isn’t a dark pattern; unearned certainty is. Fix UX with friction: ask, quiz, source.”
“Autopilot is a teacher choice. Turn it off: require chain-of-thought summaries, citations, and oral defense.”
“If finals can be done by a chatbot, the exam is the problem.”
“Treat AI like lab equipment: safety rules, methods, and a lab notebook. No notebook, no credit.”
Classroom policy kit (plug-and-play)
Allowed: AI for brainstorming, outlining, drafting variants, code comments, examples.
Required artifacts: prompt history, version deltas, 150-word self-critique of what changed your mind, 3 sources with one you disagree with.
Assessments that survive 2025: oral defense (5 min), live re-prompting in class, “explain two wrong answers,” transfer tasks (apply the idea to a new domain).
Integrity rule: If AI makes a factual claim, student must verify and attach receipts. No receipts, no points.
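The artifact and integrity rules above are concrete enough to machine-check. A minimal sketch, assuming a submission arrives as a plain dict; the field names (`prompt_history`, `self_critique`, etc.) are illustrative, not a standard schema:

```python
# Hypothetical checker for the policy kit's "required artifacts" rules.
# Field names are assumptions chosen to mirror the bullet list above.

REQUIRED_ARTIFACTS = ("prompt_history", "version_deltas", "self_critique", "sources")

def check_submission(submission: dict) -> list[str]:
    """Return every rule the submission breaks; an empty list means it passes."""
    problems = [f"missing artifact: {a}" for a in REQUIRED_ARTIFACTS
                if a not in submission]
    critique = submission.get("self_critique")
    if critique is not None and len(critique.split()) < 150:
        problems.append("self-critique under 150 words")
    sources = submission.get("sources")
    if sources is not None:
        if len(sources) < 3:
            problems.append("fewer than 3 sources")
        if not any(s.get("disagree") for s in sources):
            problems.append("no source the student disagrees with")
    return problems
```

"No receipts, no points" then becomes a one-liner in the gradebook script: zero the AI-assisted portion whenever `check_submission` returns a non-empty list.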
“Productive resistance” you can actually implement
Make the model ask 2 clarifiers before answering.
Force “think-aloud mode”: bullet its reasoning, then generate the answer.
Add “skeptic step”: AI must propose how it could be wrong and what would falsify it.
Toggle “sources-first”: answer must cite or link to at least 2 high-quality references, then explain differences.
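All four frictions are prompt-level, so they can live in a thin wrapper around whatever model call you already make. A minimal sketch, assuming `ask_model` is any callable that takes a prompt string and returns text (a stand-in, not a real API):

```python
# A sketch of the "productive resistance" wrapper: the four frictions above,
# injected as rules ahead of the user's question. `ask_model` is a placeholder
# for any LLM call; the stub below just echoes the prompt so this runs offline.

RESISTANCE_RULES = [
    "Ask exactly 2 clarifying questions before answering.",
    "Think aloud: bullet your reasoning before the final answer.",
    "Skeptic step: state how you could be wrong and what would falsify it.",
    "Sources-first: cite at least 2 high-quality references, then explain "
    "where they differ.",
]

def build_resistant_prompt(user_question: str) -> str:
    """Wrap a raw question in the four friction rules."""
    rules = "\n".join(f"{i}. {r}" for i, r in enumerate(RESISTANCE_RULES, 1))
    return f"Follow these rules before answering:\n{rules}\n\nQuestion: {user_question}"

def ask_with_resistance(user_question: str, ask_model) -> str:
    """ask_model: any callable mapping a prompt string to a response string."""
    return ask_model(build_resistant_prompt(user_question))

# Echo stub in place of a real model, so the sketch runs without an API key.
print(ask_with_resistance("Why did Rome fall?", lambda prompt: prompt))
```

The point of the wrapper is that the friction is non-optional: students interact with `ask_with_resistance`, not the raw model.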
Source: youtube, "Viral AI Reaction" (2025-11-08T17:5…)
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgxQPiOFkWbhAt8VC-h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxaOZOmo_OnM67eIwR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"unclear"},
{"id":"ytc_Ugyf5Um60r8VKvvTzl94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy-29Nh5BKhmWFthO14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwfTpCAoNMObgg79QN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyBRdewOo44XOGXBf54AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyXTQJD4Drx-jRFc2x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwMc7yxg7tOnJlrRIJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxEvUNAObcA9zmhmPR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzt4iZppY6To0mUBXR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
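Before rows like these land in the results table, the raw response has to be parsed and sanity-checked. A minimal sketch in Python, using one row taken verbatim from the response above; the required keys are read off this sample, not from a full codebook:

```python
import json

# Sanity-check the model's coding output: it must be a JSON array of rows,
# each carrying the four coded dimensions plus the comment id.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# One row copied from the raw response above.
raw = ('[{"id":"ytc_UgxEvUNAObcA9zmhmPR4AaABAg","responsibility":"distributed",'
       '"reasoning":"mixed","policy":"regulate","emotion":"fear"}]')

def parse_codings(raw_text: str) -> list[dict]:
    """Parse the model's JSON array and reject rows with missing keys."""
    rows = json.loads(raw_text)
    for row in rows:
        missing = REQUIRED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')} missing keys: {sorted(missing)}")
    return rows

codings = parse_codings(raw)
print(codings[0]["policy"])  # → regulate
```

The parsed row matches the "Coding Result" table above (distributed / mixed / regulate / fear), which is presumably how the table was populated.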