Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples

- "Ok I'm confused why I never see people bringing this opinion and it might mean m…" (`ytc_UgxQ7h52I…`)
- "The teachers today don't care most of them so A.I might do a better job…" (`ytc_UgyhzVNfM…`)
- ">Would you rather have an army of robots to defend your country at the front …" (`rdc_dwump8w`)
- "The ai art in this video did take significant time and knowledge of art to creat…" (`ytr_Ugzc31hD3…`)
- "Oh, it cant do just yet but it will. / And humanity is doomed. / Ai will destroy a…" (`ytc_UgyZdHFj3…`)
- "The Holy Grail of AI research is the Artificial General AI or AGI. This is not n…" (`ytc_UgzfLg7Y2…`)
- "What makes David Attenborough's narration so compelling in nature documentaries …" (`rdc_k9kc3sg`)
- "Can AI care for people in hospital? No. Will AI farm crops & feed people? I doub…" (`ytc_UgwU6IiHI…`)
Comment
Suppose you ask a researcher to quote the NYT article and the researcher quotes near-verbatim paragraphs?
Would we then question whether it's fair to allow researchers to read and learn from NYT articles? I think we'd all agree that it depends on what the researcher does with that knowledge.
If the researcher publishes it as his own work, that's bad. If the researcher uses it to develop an understanding and publishes his own work, that is good.
Remember that AI does not publish anything on its own. It is merely a tool, used by a person. That person chooses how it's used, for good or for bad. The fact that something can be used for bad purposes, in addition to good purposes, (like many things) is not a reason to ban it.
Platform: youtube · Topic: AI Responsibility · Posted: 2026-04-11T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwkW8ldZfltITG8uD54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx6ZfhmA9OVM4AZ88x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwqKiE1VtnpjQ1Y02p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgylbZR-4kpiRgNYEtV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwXZwOjiOM4DPvrPPN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy4cPnmC3DwpGDiM014AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugybbi7YNazQdP5Stip4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugys1EKgK7MFwOFKzn94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwU1U5N2Jjuz54XzWx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyI7QXdNi-950CDw2V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
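The raw response above is a JSON array with one object per coded comment, keyed by `id` and carrying the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A lookup-by-ID over such a response can be sketched as follows; the `index_codings` helper is a hypothetical name, not part of the tool itself, and the two sample rows are copied from the response above.

```python
import json

# Two rows copied from the raw LLM response shown above.
raw_response = '''[
  {"id":"ytc_UgwkW8ldZfltITG8uD54AaABAg","responsibility":"distributed",
   "reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx6ZfhmA9OVM4AZ88x4AaABAg","responsibility":"none",
   "reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

def index_codings(raw: str) -> dict:
    """Parse a raw model response and index the coding rows by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
print(codings["ytc_UgwkW8ldZfltITG8uD54AaABAg"]["responsibility"])  # -> distributed
```

In practice a response may fail to parse or omit a dimension, so a production version would validate each row against the allowed values before indexing.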