Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "if you want a robot to be conscious just implant a human brain into it very effe…" (ytc_UgxY4Ku-3…)
- "It's wild that you took that away from this video. Does N*idia, Virginia, or BMS…" (ytr_UgzXQ6BWM…)
- "It was not banned, just regulated. Everyone's cool with AI until some serious da…" (ytr_Ugyu6m_9T…)
- "The West is strange. First they push climate change agenda. Now they are going t…" (ytc_Ugy_3QAfe…)
- "AI is not intelligence, just a bunch of subroutines/ copyright theft algorithms …" (ytc_UgxSDLhhY…)
- "Well, its true that AI is the problem as it takes away jobs from the people and …" (ytc_UgwweSo99…)
- "It is possible that the AI traced his work, the models are trained on art from a…" (ytc_UgxV_T7XU…)
- "I believe that self driving cars, or cars in general, might never be truly safe …" (ytc_Ugy-hqylf…)
Comment
I agree with Natasha Berg’s balanced view — instead of banning it, educators should teach responsible and purposeful use.
For example, using ChatGPT to brainstorm essay ideas, outline arguments, or check first drafts can enhance learning—if students still dig into underlying concepts themselves. But as studies suggest, overreliance may lead to shallow comprehension if not guided properly.
I’d love to see structured classroom approaches: professors assign prompts like “use ChatGPT to generate 3 supporting points, then critique them in class,” or “compare AI-generated solution with your own reasoning.” This would turn a tool into a thinking partner rather than a shortcut.
What do others think — should AI assignments include a reflection component on how students used and learned from the AI, not just the final product?
youtube
2025-07-06T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyVxX2WYKeJW-aPrJV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwiTe6cmK9fB15Vjj14AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz4FHtvpln7cRlrF8R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx_yEUNkQ8NRRnIOWt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgySEkYMgSsp9L4HA2Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxRmpHMtr_ocn92srh4AaABAg","responsibility":"user","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwbH0p7egMfhwHYvkV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxezRdg13xjC9_H95h4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmXmkvccU5nblGukB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgxYUamX1R8w-LSB-zl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
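A raw response like the array above is only usable if every row sticks to the codebook's allowed values for each dimension. The sketch below shows one way to parse and filter such a response. It is a minimal illustration, not the pipeline's actual code: the `ALLOWED` sets are inferred from the values visible in the samples on this page, and the real codebook may include other categories.

```python
import json

# Hypothetical codebook, inferred from the values seen in this page's
# samples; the real coding scheme may allow additional categories.
ALLOWED = {
    "responsibility": {"user", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"fear", "approval", "indifference"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values
    all fall inside the (assumed) codebook."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: a single well-formed row survives the filter.
raw = ('[{"id":"ytc_EXAMPLE","responsibility":"user",'
       '"reasoning":"mixed","policy":"regulate","emotion":"approval"}]')
coded = parse_codings(raw)
```

Rows with out-of-codebook values are dropped rather than corrected, which is the simplest policy; a production pipeline might instead flag them for re-prompting.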