Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Thank you, Tim, for orchestrating this discussion, providing references on screen as well as in the notes, and for that fantastic transcript! I like that Stephen approaches this not as a debate but as a thorough exploration, almost an exposition, of how he navigates a difficult and esoteric topic that he doesn't quite agree with. Ultimately his position is the same as so many: that the _potential_ cost of avoiding or mitigating the risk is too great, even though the risk is Doom (i.e. myriad "bad things"), and nobody has figured out in any detail what that avoidance or mitigation would actually entail. It may well be that methods of avoidance or mitigation are very achievable and do not greatly impact the value or timeframe of attaining ASI.

To the final part of the talk, around the 4hr mark: we are very clearly racing toward Advanced Machine Intelligence (Yann's preferred terminology) without properly investigating safety measures. In racing toward a cliff, we could, if we were prepared, jump across it or drive around it. Instead, we might simply hit potholes and careen out of control, or discover that the cliff is actually quite small, yet still enough that, unprepared, we (the human race) crash and suffer or die. The argument that _any_ attempt at safety equates to turning our lives upside down is hyperbolic. A lack of intuition should lead people to investigate areas of potential high risk very carefully before engaging in them. I have little intuition about skydiving; it should seem unwise in the extreme to buy a parachute, hire a plane, and leap out without any instruction. Maybe I judge that it's worth the risk... but I'll never know if I don't actually analyse the risks. The people I land on would likely have a different evaluation of the risks that I chose to take.

At 3:30:00, may I suggest that one of the things an AI in this context may be aiming toward, or using as a step, is the equivalent of Move 37.
At 3:35:00, I might suggest blue-green algae as a biological system that, when it encounters adequate sustenance and no counters, destroys all other life simply as a consequence of following its normal operations: its only objective is to consume and reproduce. Even if it had some kind of self-awareness and even ethics, there's no guarantee that those ethics would be in a form we might relate to; they might be quite challenging to comprehend.
Source: youtube · AI Governance · 2024-11-12T07:1… · ♥ 4
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugwrs34dPhKqwKUNjat4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwmQr70otrPbIqXgvd4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx3ZZ0fBzZs6ikKE8F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw0ULd8JLncDiNnD794AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzKFqMFOBRSRYNZbgJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxPIOza9X46ztg-tHN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxrKilhY4pmsWdQc5B4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyHjEFXpXmACP1JfAZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgweOSrgE_M3vGGTCQB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxu0SgXLnEB6z3gy4R4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
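To inspect the coded dimensions for any single comment, the raw response can be parsed as JSON and indexed by comment id. A minimal sketch, assuming the response is available as a string (the excerpt below reproduces two entries from the array above; the variable names are illustrative, not part of the tool):

```python
import json

# Excerpt of the raw model output: a JSON array with one coding object per comment.
raw = '''[
  {"id":"ytc_Ugwrs34dPhKqwKUNjat4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwmQr70otrPbIqXgvd4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]'''

# Index the codings by comment id for direct lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

row = codes["ytc_UgwmQr70otrPbIqXgvd4AaABAg"]
print(row["reasoning"], row["emotion"])  # → mixed approval
```

Several comments in the batch share identical values across all four dimensions, so matching a coding-result table back to the raw response reliably requires the comment id, not the values alone.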