Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up with its comment ID. The random samples below can also be opened directly; a minimal lookup sketch follows the list.

Random samples
- `ytc_UgzmwGXuD…`: "have you ever thought why high achieving students are more likely to use AI? do …"
- `ytr_Ugz0442AE…`: "@canada1529 You remember that if you ever lose your hands and can no longer dra…"
- `ytc_UgxKs0Tle…`: "The world is incredibly naive about AI. You must research the scientists that ar…"
- `ytc_UgzZEB-DR…`: "True art has something that AI will never be able to achieve because its maker d…"
- `rdc_d7kr08e`: "We already tried that back in the 1600s. It went really well, but then the guy w…"
- `ytc_Ugx3FHZAN…`: "This video is biased. I use AI to write code and I have 0 code training, this ha…"
- `ytr_UgwJyIrcS…`: "The term "ai artist" shouldnt even exist, wtf did these "ai artists" do to get a…"
- `ytr_UgzX7kR4k…`: "I understand you but the reality is. You cant fight it. The technology is now he…"
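As a concrete illustration of the lookup-by-ID feature, here is a minimal sketch in Python. It assumes the raw LLM responses are saved on disk as JSON arrays shaped like the batch shown at the bottom of this page; the directory layout and the `find_coding` helper are hypothetical, not part of the tool.

```python
import json
from pathlib import Path

def find_coding(comment_id: str, responses_dir: Path) -> dict | None:
    """Scan saved raw LLM response batches for the record coding `comment_id`.

    Assumes each file in `responses_dir` holds one JSON array of records
    shaped like the raw response shown below:
    {"id": ..., "responsibility": ..., "reasoning": ..., "policy": ..., "emotion": ...}
    """
    for path in sorted(responses_dir.glob("*.json")):
        for record in json.loads(path.read_text(encoding="utf-8")):
            if record.get("id") == comment_id:
                return record
    return None

# Hypothetical usage with a full ID from the batch below (the truncated
# sample IDs above would first need to be resolved to their full form).
coding = find_coding("ytc_UgynsclRVW5hzrQx7Wt4AaABAg", Path("raw_responses"))
print(coding)
```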
Comment
I've followed Eliezer Yudkowsky and Scott Alexander for many years. Since machine learning and neural networks became a reality and evolved into what we today call AI, they've both advocated for caution - and their voices have hardened over the last few years into what some will call doomsday prophecies. However, their positions are not entirely outside the realm of rational understanding.
Because at 08:30 we hear that Geoffrey himself estimates the outcome to be somewhere in between (and I wholly agree). Unsure of exactly where that in-between lies, we can say for sure that this doomsday scenario is not a nut-case prediction with close to 0 percent validity. It's an outcome we need to look at with seriousness. And currently, we do not. There are far too many interests that want to use AI for gains. Personal gains, national gains, political gains, corporate gains - and I believe it's a sure bet that a hidden fork of a large AI has already happened, but with different parameters, some of which make it much less ethical. But whether I'm right or not does not matter.
What matters is that because the possibility exists and has a certain likelihood of happening, we MUST not disregard it. Because if we don't get it right, there will be no do-overs. Instead of calling it AI, call it a meteor that may smash into Earth and turn our planet into space debris. Now, we have ways to try to deal with it - but imagine if we only had one shot. One volley of nuclear warheads that might alter the trajectory of that meteor. How lightly should we take those calculations? Sure, we may have calculated the meteor's trajectory a tiny bit wrong, and the question would be moot because we were never in danger. But in case we are wrong - and in case we handle it wrong - there are no do-overs. No undo button.
This means that we must prepare for a worst-case scenario, because if we don't and it happens, we do not get to try again.
Now here comes the very sour grape: we cannot do anything unless all nations are united in this. And they are not, and probably never will be. Full and total surveillance of all network activities, across all borders, is the kind of policing we would need to prevent an AI takeover with high enough certainty to rest assured. But again - this doesn't exist and likely never will. So whether we get in trouble because of one or a few nefarious actors, or a whole country with a big enough grudge, doesn't really matter. There are too many holes we have not closed and will not close within the span of a decade - and by 2035, it might be too late. It's not for sure - but if things go wrong, they will likely go wrong within the next 5 years.
youtube · AI Governance · 2025-07-07T10:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
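The coded dimensions map naturally onto a small typed record. A minimal sketch follows; the class name and the idea of a typed record are illustrative assumptions, not part of the tool, and the example values are taken from the table and the raw response below.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    comment_id: str
    responsibility: str  # e.g. "unclear", "developer", "government"
    reasoning: str       # e.g. "consequentialist", "virtue", "unclear"
    policy: str          # e.g. "regulate", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "mixed"
    coded_at: str        # ISO timestamp, e.g. "2026-04-27T06:24:59.937377"
```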
Raw LLM Response
```json
[
  {"id":"ytc_UgyM9-GV9ylQkvnoe5V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy3JjbcO9WSiZAyd094AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwjoXI3vrWhdcxLmKx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzhoasHfaoJk4JL7RV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgynsclRVW5hzrQx7Wt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw4P-EQI6itlPf61rd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzc27kJRGbkMdNWwJx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwBnnotCe9soiC20sJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwuAjmm9gSJOVIrdIN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugw754Q5AI5T09ZB1dV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
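Note that the fifth record (`ytc_UgynsclRVW5hzrQx7Wt4AaABAg`) carries exactly the values shown in the Coding Result table above, so it presumably corresponds to the displayed comment. A minimal sketch of parsing and validating such a batch follows; the allowed label sets are only those observed in this one response, so the real codebook may define more values, and `validate_batch` is a hypothetical helper.

```python
import json

# Label sets observed in this batch; an assumption, since the full
# codebook may define values not seen here.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "government", "none", "unclear"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "indifference", "fear", "approval", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and reject records with unknown labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records
```

Validating every batch this way catches the common failure mode of LLM coders drifting off the label set (misspelled or invented categories) before the records enter the analysis.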