Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "@Bash70 dude, chill. You completely missed what I was saying. I was referring …" (ytr_UgzyuE8na…)
- "Nothing wrong with using AI to write if it's your own idea, scene by scene chapt…" (ytr_UgwGAtJ4L…)
- "Right now we're safe from the AI because our power grids require people to run t…" (ytc_Ugwrpg-WB…)
- "Imagine if the ai is pre taught the dark web, then fed the internet and then tas…" (ytc_UgxyMFjAS…)
- "@palmabolp I addressed it, read again. AI only tackles plain and unoriginal inf…" (ytr_UgwXah29H…)
- "I see a lot of people saying "AI does everything with only data, no emotions, an…" (ytc_Ugw73mxJS…)
- "I think AI is the latest type of technology that invented by human,Because the u…" (ytc_UgytaH3zA…)
- "If you're creating a new dashboard with a wealth of stacked data, here's my pers…" (ytc_UgyDqLMOY…)
Comment
People aren’t “failing” because they use easier tools like AI. They are responding exactly as humans are designed to. Our brains are built to conserve energy, minimize effort, and choose the most efficient path available. That’s not a flaw; it’s a basic cognitive principle. Expecting individuals to consistently ignore an easier option requires constant mental resistance, which itself consumes attention and energy in the background.
Once a more efficient alternative exists, using it becomes the rational choice, not a moral one.
There’s also a social dimension. People don’t act in isolation. They operate within an equilibrium where they must keep up with others. If everyone else is using a tool to be faster or more productive, refusing to use it can put you at a disadvantage. In that context, adopting the tool is adaptation.
Blaming individuals for this is harmful because it misidentifies the problem. It turns a predictable human response into a personal flaw. The real issue lies in how systems are designed and what they reward. If the environment incentivizes speed over understanding, people will optimize for speed. That’s not pathology.
Pathologizing this behavior suggests something is wrong with people, when in reality they are behaving rationally within the conditions they’re given. If we want different outcomes, we need to change the structure of the environment, not moralize individual behavior.
Source: youtube
Timestamp: 2026-04-20T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxnwO80BlixbAuuLW14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyJoa4UtJYHCCnCPFB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzFAYaDVgL4s3pfh9x4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwoAioSAizp73qDwMZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyoNIzctR84K41voSJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx5fd2ql2b2pv_K0gt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyx48uozsghh9j_lTZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw74bok4pZIFRBAMXF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgzYgpNP1HOI2RsQXhB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwXAY-jfdSrwcwGnX94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
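The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a batch could be parsed and validated before loading it into the dashboard follows; note that the allowed value sets below are assumptions inferred only from the values visible in this sample, not a documented schema, and `validate_batch` is a hypothetical helper.

```python
import json

# Assumed code sets, inferred from the values visible in this sample.
# A real pipeline would take these from the actual codebook.
ALLOWED = {
    "responsibility": {"none", "user", "government", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"approval", "disapproval", "outrage", "fear",
                "resignation", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose codes are known."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every coding dimension must be present and hold an allowed value.
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical example row in the same shape as the response above.
raw = ('[{"id":"ytc_example","responsibility":"none","reasoning":"virtue",'
       '"policy":"none","emotion":"indifference"}]')
print(len(validate_batch(raw)))  # 1
```

Dropping (rather than repairing) unknown codes keeps the check simple; a stricter pipeline might instead log rejected rows so the coding prompt can be tightened.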