Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples:

- "the thing is that if AI art didn't steal from artists who worked hard to become …" (ytc_UgwPCET18…)
- "@joshanonline Yeah, it's touch and go on the influence part, because it ultimate…" (ytr_Ugz59SN3S…)
- "AI companies are borrowing way too much money from banks. A sharp downturn rippl…" (ytc_UgyFkATCd…)
- "Still listening of course, but what I'm mostly afraid of is AI recognising patte…" (ytc_Ugyvvm-sx…)
- "I think AI still has the limitation because it is not "real" as in not directly …" (ytc_UgxyeCPmd…)
- "Really enjoyed your video. Wanted to add to your point and say yeah, speaking fo…" (ytc_UgxBNPJmF…)
- "Humans Need! To be productive. Ai is a choice, we can choose to what extent WE r…" (ytc_Ugwm7LDkT…)
- "I don't think graphics programming can be replaced, cus ai doesn't know what loo…" (ytc_Ugx4EHjJh…)
Comment
I’ve done a bit of research into the topic of AI superintelligence thanks to doomscrolling. I’m no expert by any means, but I just…can’t fully agree with Yudkowsky here
He’s right to be concerned: the tech companies making this stuff have shown time and time again that they don’t give a rat’s ass how much harm their tech causes, especially Sam Altman and Elon Musk. But I think Yudkowsky’s view is pretty flawed in its own ways too
From the way he writes and talks about superintelligence, it sounds like he expects it to just pop up one day, suddenly and without any prior warning. That’s just not how it works. He said in a tweet that his arguments don’t rely on an intelligence explosion/FOOM, but it’s really hard to accept his arguments without it. If the development of superintelligence is slow, it means that we have plenty of time to catch these AIs in the act and take precautions. Even in the takeover scenario they provide in their book (to their credit, they did say this was almost certainly not how things were gonna play out), it’s hard to see how an AI could do things like build bioweapons factories, hack crypto exchanges, and sabotage the competition without people noticing (the scenario says that the AI doing these things is not superintelligent and is trying to manipulate humans into making it superintelligent)
So while I do agree with the overall premise (we shouldn’t try to make superintelligence so soon without proper regulation and safety research), I can’t see how superintelligence will suddenly spawn overnight like they suggest it will
Source: youtube — "AI Moral Status" — 2025-12-06T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
{"id":"ytc_UgxQtfQccEd6wNZMJod4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzC3hjBhUyU0PlGd2B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxji58LJykrzd0KVip4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzzgAGML7mk2Tgao9R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwLp1OM9DGWXQvgxCR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwU9XaDkAC4DPouC4d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyg6EFuaZ7tjPIrg5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxmLpVDOqoFYB2V6h94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxz9pT9Iu8JZlGhd354AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwTQ2SHbyUoWMyXRtl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
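The raw response above is a JSON array with one object per coded comment. A minimal sketch of how such a response might be parsed and validated before the codes are stored (the allowed values below are inferred from the sample output on this page and are assumptions, not a documented schema):

```python
import json

# Allowed values per coding dimension, inferred from the sample
# responses shown above (an assumption, not an authoritative schema).
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"unclear", "consequentialist"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "indifference", "outrage", "fear"},
}


def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed rows.

    A row is kept if it is an object with an "id" field and every
    dimension holds one of the allowed values.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid


# Example with one valid row and one row carrying an unknown value;
# the IDs here are placeholders, not real comment IDs.
raw = (
    '[{"id":"ytc_example1","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"liability","emotion":"outrage"},'
    '{"id":"ytc_example2","responsibility":"bogus",'
    '"reasoning":"unclear","policy":"none","emotion":"fear"}]'
)
print([row["id"] for row in parse_codes(raw)])  # only the valid row survives
```

Filtering rather than raising on bad rows is one plausible design choice here: a single malformed object then costs one comment, not the whole batch.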