Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples
- "The only acceptable use for AI is the stuff people like DougDoug and Neuro sama …" (`ytc_UgzBs1A34…`)
- "AI is only good for porn and work, it can't fabricate a human understanding of t…" (`ytc_Ugz4UsLX1…`)
- "The rick will get expert teacher and rhe poor will get cheap ai public school. B…" (`ytc_UgyuCsrea…`)
- "I get that the interaction can come off as a bit much sometimes! The dialogue be…" (`ytr_UgwM_9iCd…`)
- "It's not the AI. It's the people putting a freaking robot in charge. And finally…" (`ytc_UgyPH8WRz…`)
- "Almost every advert between the insurance ones on YT are for AI worker replaceme…" (`ytc_UgzZB46Uj…`)
- "This is a dumb take. LOL!! First of all, AI and robot are not the same thing. I…" (`ytc_UgyQdM-CF…`)
- "Another thing to keep in mind is that there is a difference between *bias* and *…" (`ytc_UgwaLC2iC…`)
Comment
When one of the pioneers of AI — someone who helped build the technology itself — publicly warns about its risks, we should listen.
AI has extraordinary potential to advance medicine, education, and human productivity. But without strong ethical guardrails, transparency, and accountability, it can just as easily amplify misinformation, erode privacy, and concentrate power in dangerous ways.
This isn’t about slowing innovation. It’s about setting a high bar. We have an obligation to demand that governments and companies develop and deploy AI responsibly — in ways that advance humanity rather than harm it.
If we don’t insist on ethical standards now, the consequences won’t just be destructive to others. They’ll ultimately be self-destructive to the very society AI is meant to serve.
Right now, AI governance is fragmented. Some standards are mandatory (laws like the EU AI Act). Many are voluntary frameworks. Enforcement varies widely by country.
This is why public pressure matters. Ethical AI won’t happen automatically — it requires coordinated regulation, technical standards, corporate responsibility, and informed citizens who demand accountability.
youtube · AI Responsibility · 2026-02-24T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxzsr4GVmTucVz6rbN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxK78oatKYQLDijCMJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw1jMMzXw8Xx8kRoiJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwLjUyYK-hKXnJlIbV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzlx__mzd5bTF1rH_B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyFdGChB0p6hnkswV94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxBWEbP2c1fH90efVB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw7i6dBWBYLJj4bvUh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxxp2-6fYiEnX-jQvV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz71qFAXQtTqxvTc2J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
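As a minimal sketch of how a raw batch response like the one above can be consumed, the snippet below parses the JSON array, validates each record against the coding dimensions, and indexes records by comment ID for lookup. The `SCHEMA` category lists are illustrative, drawn only from the values visible in this dump rather than from the project's full codebook, and the inlined `raw` string stands in for the model's actual response text.

```python
import json

# Allowed values per coding dimension. Illustrative: these are the values
# that appear in this dump, not necessarily the complete codebook.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed"},
    "reasoning": {"unclear", "deontological", "mixed", "contractualist",
                  "virtue", "consequentialist"},
    "policy": {"unclear", "none", "ban", "liability", "regulate"},
    "emotion": {"indifference", "fear", "mixed", "outrage", "approval"},
}

# Stand-in for the model's raw response text (one record shown).
raw = """[
  {"id": "ytc_Ugxxp2-6fYiEnX-jQvV4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]"""

records = json.loads(raw)

# Validate every dimension of every record, then index by comment ID.
by_id = {}
for rec in records:
    for dim, allowed in SCHEMA.items():
        if rec.get(dim) not in allowed:
            raise ValueError(f"{rec['id']}: invalid {dim}={rec.get(dim)!r}")
    by_id[rec["id"]] = rec

# Look up a coded comment by its ID.
print(by_id["ytc_Ugxxp2-6fYiEnX-jQvV4AaABAg"]["policy"])  # regulate
```

Validating before indexing is what makes the "Raw LLM Response" view useful: if the model ever emits a value outside the codebook, the lookup fails loudly instead of silently storing an uncodable record.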