Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgxpSUGcI…: "AI will make humans better like the first automobile, it will not replace thinki…"
- ytc_Ugy2RFeCe…: "Artificial intelligence in itself isn't dangerous. Artificial intelligence is a …"
- ytc_UgzEp_z-2…: "That AI interview was a mess! 😂 It's a reminder that brands really need to up th…"
- ytc_UgwhiZjqy…: "I used to like the podcast and the clips but I no longer like its click baity ti…"
- ytc_Ugxlp3hJi…: "I work at a tech firm that specializes in AI. Let me be the first to tell you, …"
- ytc_Ugx3-fIU5…: "AI is a life saver. Better than any advisor and friend I have ever had.…"
- rdc_jmggrbn: "I was like this until I understood better how current AI LLMs work (somewhat, st…"
- ytc_Ugyh1_LX_…: "I've seen a video on youtube about how to create a whole coloring book using can…"
Comment
Scaling the “2-Hour Learning” AI model sounds promising—but it comes with serious risks. It may undermine the role of teachers, reduce social interaction, and create screen-dependent, data-tracked learners. Overreliance on algorithms could widen inequalities, exclude neurodiverse students, and sacrifice creativity, culture, and critical thinking for efficiency. Education is more than optimized content—it’s emotion, conflict, and human connection. Let’s not lose sight of that.
*This comment was created using AI 😂
youtube · AI Governance · 2025-04-20T15:4… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzP0kxMHX3qiUTUe3B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxnBU1WlSIunmU2yHt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-wKDn8NetfuHvOa14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzNJg25Iqma_-4WMqN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxn8OlhI86c1ke-JAp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxLu7BXFyvL_M2_l4Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyKYbELMn-ByiNKrd94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwOt3XB6WA7rEYQoNh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyvlD3itu_inltHKr14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyAV8hyI7iwB5supdN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]
```
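The raw response is a JSON array with one record per comment: an `id` plus the four coding dimensions shown in the table above (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of turning such a response into a lookup by comment ID (the inline `raw` string here is a shortened stand-in for a model response, using only records copied from above):

```python
import json

# Stand-in for a raw model response: a JSON array of coding records.
raw = """[
{"id":"ytc_UgyvlD3itu_inltHKr14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyAV8hyI7iwB5supdN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]"""

# The four coding dimensions each record is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Parse and index by comment ID, keeping only the expected dimensions.
by_id = {
    rec["id"]: {dim: rec[dim] for dim in DIMENSIONS}
    for rec in json.loads(raw)
}

print(by_id["ytc_UgyAV8hyI7iwB5supdN4AaABAg"])
# {'responsibility': 'distributed', 'reasoning': 'mixed',
#  'policy': 'regulate', 'emotion': 'fear'}
```

The last record here is the one rendered in the Coding Result table above, so the printed dimensions match that table.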