Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- It was when a colleague was talking about a compilation of "guess what's AI" vid… (ytr_Ugz0RWrFB…)
- Well, than at that point the AI will automatically have to be recognized as the … (ytr_Ugwlt0WfD…)
- A few weeks ago I was of the mindset that AI it's just another tool and that the… (ytc_UgyLOfwxF…)
- Imma just put my two cents in as an artist myself, my art style is a composite o… (ytc_UgwZCM5IP…)
- Kirby's Return to Dreamland music? In 2026? Paired up with AI bros getting shaft… (ytc_Ugy0VgP_1…)
- Such an insightful conversation as someone trying to understand the lay of the l… (ytc_Ugx7xhOVB…)
- We got 100,000 people from other countries who don't speak the language, don't u… (ytc_UgyOlzzeN…)
- poisoning the data pool is actually vital because, quoting FunkyFrogBait (cause … (ytc_UgwrlVL-1…)
Comment
This interview was fairly enlightening, but I'm saddened to see Neil spreading misinformation about genAI. One thing he did get right is that genAI can only do things its been trained on, or in other words what already exists. He seemed to leave the problem at "well AI is nothing to worry about if you keep doing things that AI can't do/keep improving," which is to some extent true, but it ignores what I think is the principle reason why genAI being used for any creative endeavor is bad: brain rot.
When you outsource your problem to genAI, whether it be a term paper, a program, or an art project, you are forgoing practicing those skills to use work someone else already did. Over the short term, this is marginally okay, ignoring the blatant theft, but long term your skills will degrade. If you scale this up to our entire society, eventually you'll have an unskilled populace relying on a tool that can't innovate. What happens to our "exponential growth" then? All of our advancements as a society have come from having skilled people master their craft and share their knowledge with others so eventually someone figures out how to put that knowledge together to make something new. We are a people who have immensely benefited from the sacrifice and labor of our predecessors, but if we stop doing that, we are only screwing over our descendants.
This is to say nothing about what will happen to art and media if we let genAI make our music, write our entertainment, or draw what it can't comprehend. Which that is my second major issue with what Neil said: AGI is nothing *close* to what our brain is. It is an approximation of what we think our brains might be like, which almost certainly pales in comparison to the real thing. Don't get me wrong, AGI as it currently stands is intelligent, but it's not conscious. This is because all computers are deterministic, or in other words: given the same starting conditions and input, AGI will always produce the same result. This is not saying that if you give the same AGI the same prompt at different times it'll give the same answer, because by making the first prompt you change the model. I'm saying if you built the same model using the same data on identical machines at the same time and gave them the same prompts simultaneously, their answers would be identical.
youtube · AI Moral Status · 2025-07-23T16:0… · ♥ 48
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyAxAfp0HrNJEtDJrh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxGFeN7Sx5i3NSDEUR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw2GKIxUk892yaABSZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxmdp33praTZigNdTR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzzcioOvqVDbHz6FR14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgynG7Bcdj9eMwXmYQF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy1ntHZgea8HIaBqwJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwhELyIPy_sMeVG7Nl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzjqLVkRbazcTh3DB94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw1vkD0cT_zOCvuk_94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```