Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "This is just politics of fear, which has been used in the United States and glob…" (ytc_UgxOuVDrb…)
- "Elon Musk should be in charge of the proposed AI regulation board. I don't thin…" (ytc_Ugy8hrjk8…)
- "The argument is that AI can be dangerous because they can have racist and sexist…" (ytr_UgwHD0Phb…)
- "Training AI on the Internet wasn’t a mistake; allowing the people shaping and mo…" (ytr_UgxHdv_GG…)
- "The main thing AI (Angels of Immolation) is going to lead to is new weapons of w…" (ytc_UgwhphJhx…)
- "@Bloopoopthats the whole point. It CAN’T portray emotion as a man made AI yet, a…" (ytr_UgxbFIK7V…)
- "My gf step dad made a website with AI and he was talking about how coders aren't…" (ytc_UgyOpsGtB…)
- "if u watched a 1 minute video and 2 seconds of it was ai footage while the rest …" (ytr_UgyLnz2f8…)
Comment
Thank you, Tim, for orchestrating this discussion, providing references on screen as well as in the notes, and for that fantastic transcript!
I like that Stephen approaches this not as a debate, but as a thorough exploration, almost an exposition, of how he navigates a difficult and esoteric topic that he doesn't quite agree with. Ultimately his position is the same as that of so many: that the _potential_ cost of avoiding or mitigating the risk is too great, even though the risk is Doom (i.e. myriad "bad things"), and nobody has figured out in any detail what that avoidance or mitigation may actually entail. It may well be that methods of avoidance or mitigation are actually very achievable and do not necessarily greatly impact the value or timeframe of attaining ASI.
To the final part of the talk, around the 4hr mark: we are very clearly racing toward Advanced Machine Intelligence (Yann's preferred terminology) without properly investigating safety measures. It may be that in racing toward a cliff, we could, if we were prepared, jump across or drive around it. Instead, we might simply hit potholes and careen out of control, or discover that the cliff is actually quite small, but still enough that, unprepared, we (the human race) crash and suffer or die.
The argument that _any_ attempt at safety equates to turning our lives upside down is hyperbolic.
A lack of intuition should lead people to investigate areas of potentially high risk very carefully before engaging in them. I have little intuition about skydiving; it should seem unwise in the extreme to buy a parachute, hire a plane, and leap out without any instruction. Maybe I would judge that it's worth the risk... but I'll never know if I don't actually analyse the risks. The people I land on would likely evaluate the risks I chose to take rather differently.
At 3:30:00, may I suggest that one of the things an AI in this context may be aiming toward, or using as a step, is the equivalent of Move 37.
At 3:35:00, I might suggest blue-green algae as a biological system that, when it encounters adequate sustenance and no counters, destroys all other life as a consequence of simply following its normal operations; its only objective is to consume and reproduce. Even if it had some kind of self-awareness and even ethics, there's no guarantee that those ethics would be in a form we might relate to; they might be quite challenging to comprehend.
youtube · AI Governance · 2024-11-12T07:1… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
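
Each coded record assigns one label per dimension from a small closed set. Below is a minimal validation sketch in Python; the allowed label sets are assumptions inferred only from the values visible in this section, so the real codebook may define additional labels.

```python
from dataclasses import dataclass

# Labels observed in this section. Assumption: the real codebook
# likely allows further values, especially for "policy", where only
# "unclear" happens to appear in this sample.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"unclear"},
    "emotion": {"fear", "approval", "outrage", "resignation",
                "indifference", "mixed"},
}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def invalid_dimensions(self) -> list[str]:
        """Return the dimensions whose label is outside the allowed set."""
        return [dim for dim, allowed in ALLOWED.items()
                if getattr(self, dim) not in allowed]
```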
Raw LLM Response
```json
[
  {"id": "ytc_Ugwrs34dPhKqwKUNjat4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwmQr70otrPbIqXgvd4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx3ZZ0fBzZs6ikKE8F4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw0ULd8JLncDiNnD794AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzKFqMFOBRSRYNZbgJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxPIOza9X46ztg-tHN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxrKilhY4pmsWdQc5B4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyHjEFXpXmACP1JfAZ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgweOSrgE_M3vGGTCQB4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxu0SgXLnEB6z3gy4R4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]
```
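
Since the page supports looking up a coding by comment ID, a batch response like the one above can be parsed into an ID-keyed index. This is a minimal sketch assuming the raw response is always a well-formed JSON array of flat objects; the helper name `index_by_comment_id` is hypothetical, not part of the tool.

```python
import json

def index_by_comment_id(raw_response: str) -> dict[str, dict]:
    """Parse one raw batch response and key each record by its comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# Usage: look up the coding for one comment by its full ID.
# batch = index_by_comment_id(raw)  # `raw` is the JSON array shown above
# batch["ytc_UgwmQr70otrPbIqXgvd4AaABAg"]["emotion"]  # -> "approval"
```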