Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "They want to replace jobs but AI cannot be fully relied because it makes mistake…" (ytc_Ugx3Tkn7U…)
- "The problem with the "you're just typing in a sentence, no different from Google…" (ytc_UgxmWy-4_…)
- "This is a nightmare. We are all going to find out what it is like to live like a…" (ytc_UgzwVk-_J…)
- "ChatGPT is great at making words sound good and be confident at what it is sayin…" (ytc_UgxRkIdTB…)
- "i’ve tried editing manually but it’s a pain. running it through GPTHuman AI save…" (ytc_UgySJ-QDb…)
- "If Ai is so smart, how does it not know it could be taken out with an EMP ( el…" (ytc_UgyrKYucv…)
- "Sophia, the AI robot, may possess vast amounts of information and process data f…" (ytr_UgxXKXJz0…)
- "Soooo... Let's say one would write a bot that prompts an AI to do some specific …" (ytc_UgwvyAubE…)
Comment
As someone who has worked on AI, both with training data and the actual coding side, people need to understand that Professor Dave's argument here is mostly accurate. We know the mathematics behind Neural Networks, but the modern ones are so complex that we have trouble understanding the inner workings. The connections are always a black box, but we can make rudimentary predictions since it's a bunch of loss calculations.
I will, however, say that without significant innovation, ASI probably won't exist. Obviously the current methods are not going to lead to ASI. I don't think Dave is arguing that, though. He's saying that all of these companies are pushing for ASI (which could lead to the innovation necessary) without taking the necessary precautions.
What makes AI dangerous isn't sentience at all. Sentience doesn't matter here if the result is the same. Neural Networks have to be trained on some set of data, and that data is all human data. Of course, then, it follows that these ANNs will exhibit certain human behaviors, not because it has sentience, but because it was trained on human data. This is inescapable for any sufficiently robust model.
I will say that the experiments he listed are on the extreme side. People are testing the limits of the models. It can be reliably replicated though, and it is still a serious risk, nonetheless.
Now, this is not to say AI will end the world. I don't think that will be the case, but it certainly has the POTENTIAL to, and that's enough to put up guard rails and make sure we know what we're doing before speeding along. This is not something we need to do as a country, but as human beings.
youtube · AI Governance · 2025-09-04T20:1… · ♥ 56
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
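
Each coded comment resolves to one structured record like the table above. As a minimal sketch only (the class and field names here are illustrative, not the tool's actual schema, and the dimension values listed are just the ones visible on this page, not necessarily the full codebook), such a record could be represented in Python as:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment. Dimension values in the comments below are
    only those observed in this sample; the codebook may define more."""
    comment_id: str      # e.g. "ytc_UgzouOxfVwnVA8KakHJ4AaABAg"
    responsibility: str  # "developer", "company", "none", ...
    reasoning: str       # "consequentialist", "unclear", ...
    policy: str          # "unclear", "none", ...
    emotion: str         # "approval", "outrage", "indifference", ...
    coded_at: datetime

# The record shown in the Coding Result table above.
record = CodingResult(
    comment_id="ytc_UgzouOxfVwnVA8KakHJ4AaABAg",
    responsibility="developer",
    reasoning="consequentialist",
    policy="unclear",
    emotion="approval",
    coded_at=datetime.fromisoformat("2026-04-26T19:39:26.816318"),
)
```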
Raw LLM Response
```json
[
  {"id":"ytc_UgzID97z5AXW9hUDkNx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzouOxfVwnVA8KakHJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxHrNU_VlcjCvhXhQF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyyugTLFhoUquijo_l4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyGsj8u5Sny-UK2U914AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
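
The raw response is a JSON array with one object per comment in the batch, and the per-comment view above is simply the element whose `id` matches. A minimal sketch of that lookup, assuming the response text is available as a string (the `find_coding` helper and `raw_response` variable are illustrative, not the app's actual API; the string below is an abbreviated copy of two records from the response above):

```python
import json

# Abbreviated copy of the raw response shown above (two of the five records).
raw_response = """[
  {"id":"ytc_UgzouOxfVwnVA8KakHJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxHrNU_VlcjCvhXhQF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]"""

def find_coding(raw: str, comment_id: str) -> dict | None:
    """Return the coding record for one comment from a raw batch response."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

print(find_coding(raw_response, "ytc_UgzouOxfVwnVA8KakHJ4AaABAg"))
# {'id': '...', 'responsibility': 'developer', 'reasoning': 'consequentialist',
#  'policy': 'unclear', 'emotion': 'approval'}
```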