Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
On the point about why more people aren't panicking about this: I think there are many hurdles to clear before taking this seriously, not just intellectually but on a gut level. From my perspective:

1. High uncertainty around whether AI capabilities will progress enough for this to even be a potential issue in the near future, e.g. ~10 years. I can see reasons to expect that they may, but because this is so far outside our normal experience, gut instincts and healthy skepticism push back.

2. If it could work, the potential upside is enormous. Say you get over hurdle 1: it's not like a regular engineering project in Rob's analogy, since none of those would cure diseases, solve climate change, or lead to so many other exciting breakthroughs. The bridge or the airplane wouldn't be worth a 10% risk, or even a 0.1% risk, but that is less clear in this case. How do you weigh almost infinite upside against downside risk?

3. Human curiosity. As you mentioned, this stuff is just cool; the desire to see what it can do is so strong, even aside from the tangible benefits.

4. Is this even a solvable problem? This is the biggest hurdle for me. I just don't believe it's possible to guarantee safety from self-improving systems that are smarter than us; I don't think it would matter if we had 1,000 years, which we clearly don't. I even worry that attempts to control them may set up adversarial dynamics which actually increase risk.

5. Maybe it will just be okay. Not much of a guarantee, I know, but I didn't follow Rob all the way to assuming that the default outcome is bad. I think there are lots of ways it could go wrong and that the risk is significant, but I don't think we know enough to say whether the default is good or bad.

So this leads to the "we need to stop" conclusion. Again, I don't think this is actually possible from an enforcement perspective. Plus, if you think like me that safety research / technical alignment will not work (at least not to give anything close to a guarantee), then you're not just asking for a pause: someone will work on this, so you will slow things down but potentially leave only the least morally concerned players in the game. With that said, a pause or slowdown is not without value. While I am very skeptical of control approaches ever working (and therefore of guarantees), that doesn't necessarily mean research could not improve the odds of it going well; this is unclear to me.

I also strongly disagree with your stance that safety-conscious researchers should avoid frontier AI companies. I'm with Rob on this: we need voices of reason within these companies. They will push on with what they are doing unless forced to stop, so why put all of your eggs in this basket? The point is there is no guaranteed safe route through this; even efforts to enhance safety could indirectly lead to disaster. The failure mode I worry most about now is concentrated power in the hands of a few if we do get super-capable AI, but even if that goes well, I think there is still a substantial risk from the AI itself, e.g. our goals not aligning. I just don't know what to suggest we do, other than try to tread carefully. I would be very happy to discuss any of this further if anyone wants to.
YouTube · AI Governance · 2025-09-06T13:1… · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgweKCA1oTiM6PI5UWV4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugwf9tN-tAEsuxo-TmN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz_R4N5onF8kDGmSZx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxCBgfHZ8yRRuxdtWh4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugyq-5SP31UK4GUNRVh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgwiAYRFuIc-HmBwpQ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxIWl9yLq9dv-o9jKJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxXI1M4asmvIRy-rRB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxFR7lBxPSNIkQfrLR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzsFOCHfMkj1bPSJrJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"} ]