Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "This is why Tesla pails in comparison to Ford and GM for self-driving. Personall…" (ytc_UgwhTeWdc…)
- "It's not snobbish to dislike ai art and people who claim they're "AI artists," y…" (ytr_UgyKqzwW8…)
- "Whats wild is when magazine covers and posters of people made up of 1000s of ima…" (ytc_UgyuBc7xj…)
- "AI - Humans are outdated, let's replace them. Anyways, Humanity doesn't exist an…" (ytc_UgxQ4er7y…)
- "Bruh, this guy projecting his fear of the unknown on to what AI may do. Unless A…" (ytc_Ugw-gD40h…)
- "While I’m supporting you on most points (subscribed to your channel long before …" (ytc_Ugy_rwcEr…)
- "bh this program is awesome for artists. thinking of it AI art is mostly haed bec…" (ytc_Ugz_zHbKQ…)
- "AI generated CEO of AI: "We need more money to make more dangerous AI models tha…" (ytc_Ugx9IRnYs…)
Comment
Just used AI to summarize the AI transcript of this video to save me an hour and seventeen minutes of my life.
End result? AI hyping itself. The thing that it does really really well and which it profoundly prioritizes over everything else is **generating plausibly real output**. This occurs at the expense of actual real output. As this pertains to generating images - that's not surprising. But as it pertains to explanations of things, or homing in on facts or evidence (legal, medical, scientific) ...AI is a massive blundering idiot effective only in keeping us engaged. A shit show version of the Matrix. When people point to the idea that somehow this will all "improve" in the decades to come I get a chill down to my bones
Source: youtube · Video: AI Moral Status · 2025-10-31T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxDlAQpJvFbgGM4r4N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyjUsv4wUBOyvwEwRd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwaha5FvqKpPn5hTL14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzQ7alvMqtC2j7XWyN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyBNlFBAtH6vnIH_7F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxD7d65FwNleHg6ndh4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxdJNz5J6OmXBHtUHR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugzc0TaJYKf3z5Gpc6F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwDSWrHCmEbQb6BGxp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzeml3bGYbm1250Kup4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
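The raw response above is a JSON array with one record per coded comment. A minimal sketch of how such a batch might be parsed and validated before the codes reach the database (the allowed category values are inferred from the samples on this page; the full codebook may define more, and `parse_batch` is a hypothetical helper, not part of this tool):

```python
import json

# Allowed values per coding dimension, inferred from the records shown above.
ALLOWED = {
    "responsibility": {"developer", "company", "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"outrage", "fear", "indifference", "resignation"},
}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response into {comment_id: codes}.

    Records with a missing ID, a missing dimension, or an
    out-of-vocabulary value are dropped rather than stored.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue  # no comment ID to key the result on
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"virtue","policy":"ban","emotion":"fear"}]'
print(parse_batch(raw)["ytc_x"]["policy"])  # prints "ban"
```

Dropping invalid records (instead of raising) keeps one malformed line from discarding the rest of the batch; rejected IDs could be queued for re-coding.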