Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When Stuart says "These systems would rather press a nuclear button that be switched off" it seems like he omits the fact that we talk about LLMs, that is, models trained on human text data. They mimic humans, right now. They do nothing else. So it's not like a survival mechanism is making them do that as you imply, but simply their "mimicked" survival instict from their training data of humans. Now is that mitigating the dangers? Probably not...
youtube · AI Governance · 2025-12-06T13:4… · ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwevNhJIBFuQ1F2u3Z4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",           "policy": "none",          "emotion": "fear"},
  {"id": "ytc_Ugxe6B26TVkbyEztgch4AaABAg", "responsibility": "company",     "reasoning": "deontological",   "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_Ugw-i8C8QWaSGDCRWyF4AaABAg", "responsibility": "none",        "reasoning": "unclear",         "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgwE136FhVR86njTQGB4AaABAg", "responsibility": "government",  "reasoning": "consequentialist","policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgznqTbIiEeANSIYUTl4AaABAg", "responsibility": "company",     "reasoning": "consequentialist","policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgyJ9BSWbOpVZA-YvHZ4AaABAg", "responsibility": "distributed", "reasoning": "mixed",           "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgxWfqSlCZ-0U8f7o6h4AaABAg", "responsibility": "developer",   "reasoning": "mixed",           "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgyI65zCqZZclUDKl2J4AaABAg", "responsibility": "none",        "reasoning": "unclear",         "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugx6Z3yJc6RHdPbhxYd4AaABAg", "responsibility": "ai_itself",   "reasoning": "virtue",          "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgwzU98Jx0tfK-uPAFV4AaABAg", "responsibility": "government",  "reasoning": "deontological",   "policy": "industry_self", "emotion": "outrage"}
]
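The coding-result table above is derived by matching the comment's id against this raw array. A minimal sketch in Python of that lookup (the ids and values are copied from the response above, abbreviated to two records; the tool's actual parsing code may differ):

```python
import json

# Raw model output: a JSON array of per-comment codes.
# Values copied from the response above (abbreviated to two records).
raw = '''[
  {"id": "ytc_UgxWfqSlCZ-0U8f7o6h4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwevNhJIBFuQ1F2u3Z4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "fear"}
]'''

# Index the records by comment id for O(1) lookup.
codes = {record["id"]: record for record in json.loads(raw)}

# Pull the row for the comment shown on this page.
code = codes["ytc_UgxWfqSlCZ-0U8f7o6h4AaABAg"]
print(code["responsibility"], code["emotion"])  # → developer indifference
```

In a real pipeline the raw string would come straight from the model response, so it is worth wrapping `json.loads` in a try/except to catch malformed output before indexing.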