Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (comment text truncated; quoted verbatim):

- ytc_UgwyehYr1…: "As an AI diehard, I am disgusted by the widespread theft. I hate that I have to …"
- ytc_UgwVnNctP…: "I work in billing posting payments. Sounds like I’m loosing my job to ai later t…"
- ytr_UgwmCoohK…: "@rudra-b3i actually the training of AI is the thing requiring most of energy use…"
- ytc_Ugzcy-KGI…: "I can understand using AI as sort of a base (although jank as hell idea there) b…"
- ytc_UgwgDPAMy…: "Honestly / Every time an AI image is used / A cent should be given to the creator …"
- ytc_UgxBvMC-c…: "If coding was so trivial for AI, you would not import and rely on so many Indian…"
- ytc_Ugxizd5Ld…: "im just going to say this, take inspiration from the (sloppy)art ai makes and tu…"
- ytc_UgyJ8lcc3…: "I know someone with the opposite experience with AI, AI saved his Wife's life. (…"
Comment
Normally I'm pretty on board with your videos covering new topics and I think you did a fairly good job with this one, but there's also a lot of things being glossed over or overanalyzed that makes it hard for me to fully agree on with you here.
First things first would be the studies conducted by these companies. When I hear about these hypothetical scenarios, from a glance, it does seem pretty alarming all things considered, but this is assuming what's been studied isn't rife with error or at least isn't trying to generate desired results, or even leave out crucial data that would've otherwise provided different results. To me it seems like the purpose of these studies only capture specific scenarios rather than the bigger picture, which to your credit you did highlight how nuance may not be considered in these simulations compared to real world scenarios. I just find it hard to believe that these studies aren't being used as theatrics compared to more grounded results, so I don't see them as being too hugely concerning.
But the most aggravating element to this is the irony being projected by a lot of people raising the alarm bells about ASI. We as a species are responsible for the ongoing Holocene extinction, numerous genocides, wars, famines, etc, on top of this, we've enabled tyrannical governments, exploitative corporations, thereby speed running our limit of growth and reducing earths resources, we projected ideologies believing we are the most utmost important species on the planet and beyond, and yet somehow, I'm suppose to believe that a thinking machine is an equal threat to man-made climate change and extinction events? If this isn't the most glaring example of human hypocrisy, I don't know what is. And to make it even more obvious, we're more concerned about a machine that thinks and that the fact we don't know or control it showcases our disgusting projection of our actions onto another agent, the lack of self awareness is just infuriating and goes to show how far our human supremacy has gone unchecked.
I find it hard to believe a machine that's supposed to be sapient, is also devoid of feeling the same complex levels of nuance that human beings face, but because we've devastated ourselves and our environment for so long, it comes to no surprise that we'd believe that to be so. This isn't to say AI isn't a threat, but why should I as a human being feel more intimidated by such an entity when we are already proven more than enough how much of a threat we are in this reality?
It comes to no surprise that the same powerful individuals fear mongering about AI are also the ones steamrolling through its advancements without thought. That is to be expected with capitalism, this has been the case for quite some time that corporations have pushed for progress in technology just to keep their pockets filled. If they really cared about the concerns, why would they be putting so much research and performance into these endeavors uncritically? It's just another venture for them.
People have been manufacturing a boogy man out of AI for quite some time and the reasoning behind it is as disappointing as it is hypocritical. The threat of AI is very much a capitalist and geopolitical issue, one where it should provide proper regulation, and I agree to your sentiment shared on that front, but how would we enforce a ban on ASI the same way we do chemical weapons? It's clear that there's many ways ASI can be used and idk if a ban would solve the problem outright or even for the concerns we have set out, though I'm sure exceptional rules could be made to work.
This isn't your most stellar video on the topic, but I do appreciate you trying to branch out of your field to try and explain to the public on these matters, it's just I find some cracks here that haven't been addressed or needs more focus on in careful detail.
EDIT: It's nice to know there are people here voicing their criticisms better than I could, though I've also seen a number of people using the typical technophobic fear mongering, which I don't know is the result of technophobs coming out of the woodworks just for the mere mention of AI or if it's around the irresponsible presentation of AI in this video in some aspects. I'm rather disappointed tbh, especially seeing how Dave would call out this kind of bandwagon response from the public for pseudoscience, I don't see that same level of skepticism here sadly.
youtube · AI Governance · 2025-08-26T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx2dnnGGD-W5fU6K0F4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyhhEWRUGCifGtxDlB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwUcSq3bKWcl9yNx0B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz5r9-dXHpbT_O2Hzd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwO3WqSKyygR7sE6RJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
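The raw response above is a JSON array in which each record carries a comment ID plus the four coded dimensions shown in the Coding Result table. A minimal sketch of turning such a response into a lookup keyed by comment ID might look like the following. This is illustrative, not the tool's actual implementation: the function name `parse_coding_response` and the strict-validation behavior are assumptions; the field names and sample records are taken directly from the response above.

```python
import json

# Every record must carry the comment ID plus the four coded dimensions
# (field names as they appear in the raw response above).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into a dict keyed by comment ID.

    Raises ValueError if a record is missing a dimension, so malformed
    model output is caught before it reaches the results table.
    (Hypothetical helper, not part of the tool shown here.)
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {sorted(missing)}")
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS if k != "id"}
    return coded

# Two records copied from the raw response above.
raw = '''[
 {"id":"ytc_Ugx2dnnGGD-W5fU6K0F4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwO3WqSKyygR7sE6RJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]'''

coded = parse_coding_response(raw)
print(coded["ytc_Ugx2dnnGGD-W5fU6K0F4AaABAg"]["reasoning"])  # mixed
```

Validating up front is the design choice worth noting: LLM coders occasionally drop a field or emit prose around the JSON, and failing loudly at parse time keeps "unclear" values in the table meaning *coded as unclear* rather than *silently missing*.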