Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@ProfessorDaveExplains Hey I know I'm a little late to the party here, but I'd like to comment on this anyway. I'm a software developer so I have some insight about what AI can and can't do, and it's 'motivations' for doing the things it does. I know I'm like the millionth person to say it but I'll say it here anyways: Super AGI poses an existential risk to the entire planet. That is not hyperbolic in the slightest, I am 100% sincere about that. I think if we can get key countries, especially China, to the table on this we can make some real headway in mitigating that risk. But all these people talking about China nonstop isn't just about economics. China will continue their AI research in earnest unless they are a part of whatever deal is made. Even if a deal is made, rest assured research will continue, only in secret where people aren't being scrutinized as much. This is just as grave a threat as nuclear annihilation, but legislation is less effective. You don't need plutonium or enriched uranium to make an AI model. Hardware is expensive and can be hard to get in places like China, but they get it anyway, it gets cheaper every day, so responsible actors have no choice but to play the game, if only so that they can stay ahead of bad actors. I know it sounds like I'm simping for AI companies here but I'm honestly not, this is just my opinion so take it with a grain of salt. TLDR: China is a real threat in more than an economic sense, and responsible actors should engage in AI research in spite of the threat it presents.
youtube · AI Governance · 2025-10-31T04:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_Ugz7FqPfTG0hpaYJL7t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwM5lObwpSiBg8l-NR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgwK65xFF_WEQfBQs2h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzWJyBA9vyOCknGktx4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyLO8XPBKRKjxL0T894AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"} ]