Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Frankly, I find the "not much headroom above us" assumption terrifying. If we take it as a baseline that Von Neumann and I are roughly as intelligent, then the difference would have to be in him applying himself to the problems with more diligence and discipline, which for an AI are just different terms for compute. Would that not mean that AGI == ASI? A cluster of AIs at human level would be able to simulate thousands of Einsteins and Von Neumanns without any additional breakthroughs in intelligence.

As for "where are the politically minded?", I think there are a few issues; the currentness or visceralness of the cause and the scale of the ask are two of the bigger ones. Israel-Palestine is happening right now, in HD, so it is easy to get engagement, and the ask is pretty low, as the US (possibly even the EU) could force a stop to the conflict without really needing to ask either side's opinion. Reagan ended a similar conflict in South Lebanon with a single phone call, so it's not like it would take some herculean effort from the White House. Climate change is more similar to AI, and we see many of the same challenges, but it has had decades to spread the message. We still see people getting more upset with somewhat annoying demonstrators than with even the worst polluters of the business world; they get more annoyed at radlibs blocking the street than at BP. To slow down AI would be to try to get out ahead of hundreds of billions of dollars, and the track record on those fights is not great. As an example: some 200 million Americans are not interested in cutting Medicare, but Musk is, so Medicare cuts you get.

On Miles' apprehension about a pause, I think it is pretty easy to make the case that things like image or music generators are of such marginal utility that if some sort of broad-strokes compute cap cuts them out, just to get started on a safer track, that is more than worth it. If we have a pause (which does not have to mean a permanent stop), then we're in a much safer spot to enable, piece by piece, technologies we can tell are clearly safe, rather than trying to take a nuanced approach that we might not have fully sketched out by the time AGI is here. The burden of proof should fall on developers to show that their models are safe, not on the rest of us to prove they are not, so a broad-stroke, ban-hammer approach is arguably an appropriate suggestion. It shouldn't be all that difficult for a song generator to prove it is safe (and if it turns out to be, then that knowledge is arguably of more human utility than the song generator itself).

Long term, I don't think pause/unpause is a helpful framework. I'd rather we have a model more like space exploration: you would need a pretty convincing story before sending a new type of manned spaceship into space, and if you wish to connect to the International Space Station you would probably need to make that safety case to an international board. Similarly, I think frontier research in particular should follow a trajectory of safety case -> international scrutiny -> execution. And I think it is completely inappropriate to have data centers that are locked down to one company; arguably they should have read-only access for everyone who cares, from any nation. The safety requirements for such global read-only access would be a good warmup for the cybersecurity needed if AGI becomes relevant anyway.
I don't think any model where AI companies run safety the way regular engineering companies do is appropriate, as there is no version of liability that would be sufficiently threatening when dealing with God Creation, unless we did comic book villain shit like grabbing their parents and telling them "the AI does anything fishy and mom's head goes boom". But that seems a lot worse than some version of CERN for AI, like Hassabis has suggested.
Source: YouTube · AI Governance · 2025-08-23T20:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugx_OeRdRDZvoHLYB_x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugx0j2yZJYH4UNuVdWl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgynmuJ0ySanHIlMCDx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugzufn757Y9_iPiaeHd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxbmEnk5JRedDqQOkF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwRnQqCN7aQdjx4uGN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyjbFjVx6bScgzZlSJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyNPlF-I0dg1G2PEEp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwdgVMsZu3ZbdBMTOF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx-qzznYwo1reEFsad4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"} ]