Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My issue with Dr. NdGT's, optimist and lax take on AI and likely AGI, is how corporations, governments, and people in power will use them. They've talked about it here in multiple examples. Nuclear technology is amazing, but political powers have forced the invention of the atomic bomb. In recent history we have a more analogous technology to AI. Social media. Social media is great, you can get to know what your friends and family are doing even if you're on the opposite side of the world. But because of social media, we've had a wave of right wing populist leaders all over the world, Trump, Brexit, a number of dictators in third world countries. Hasan and Neil also brushed the topic of measles. Fake science in social media even more so during the pandemic, have led to preventable deaths/sickness. This was/is because we were too lax on seriously tackling the problems of social media, in particular misinformation, disinformation, and propaganda. These are the dangers of AI that people are rightfully wary of. More than just taking white collar jobs, but the actual impact we should be worried about is how people in power and in control of these technology could set society back as Hasan and Neil have put it. AI is great! But the people in power, corporations, and politicians when unchecked have all the means to abuse the technology. It's hard to chalk it up to the notion of things will even out in the future, when millions are suffering right now because of it. NdGT hit it on the head when talking about the assault on academia (39:05). I wish he has this similar attitude on AI. Later he says, "There's a lot getting broken now that took many many decades to build." Precisely! You have to see the consequences of AI. Of what people in power might use and abuse them for. Right now, we may actually have a chance to prevent a preventable bad thing. 
The dangers of AI and AGI is not a machine uprising (at least not for now), the dangers of AI that we should be cautious about is how this technology could be abused by corporations, billionaires, politicians, bad actor governments, and so on. But that's not gonna happen if we're too lax or too optimistic on AI.
Source: youtube · AI Moral Status · 2025-07-27T11:2… · ♥ 87
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwuIz_JIB2wwL-OZH94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzJnCrviGed1-IiEw54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugynucoks-FhbrXibMV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxu14ywDNNhHZR2WnR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyDy01d_yjJtEA9zRF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyUN7IIP2mbsfJKJeB4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy7oG9lmMR0PnVUAX14AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyzYjO1bU-qA12euBt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwueXC8tMiKJfWyOAR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzzgufQiCtRmKqONbB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
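The raw response is a JSON array with one object per comment id, each carrying the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of inspecting it with Python's standard `json` module — the `index_codes` helper is hypothetical, not part of the coding tool, and only two records are reproduced here from the response above:

```python
import json

def index_codes(raw: str) -> dict:
    """Map each record's "id" to its full coded record (hypothetical helper)."""
    return {rec["id"]: rec for rec in json.loads(raw)}

# Two records copied verbatim from the raw LLM response above.
raw = (
    '[{"id":"ytc_UgyDy01d_yjJtEA9zRF4AaABAg","responsibility":"company",'
    '"reasoning":"deontological","policy":"unclear","emotion":"mixed"},'
    '{"id":"ytc_UgzzgufQiCtRmKqONbB4AaABAg","responsibility":"company",'
    '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]'
)

codes = index_codes(raw)
rec = codes["ytc_UgzzgufQiCtRmKqONbB4AaABAg"]
print(rec["policy"], rec["emotion"])  # regulate fear
```

Looking up the second id returns the same codes shown in the Coding Result table above (policy "regulate", emotion "fear"), which is one quick way to check that the displayed coding matches the raw model output.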