Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
About half of all published AI researchers say there is a significant risk of human extinction from superintelligent AI. Almost everyone working at the leading labs believes this. The two most cited scientists in the world are called the "godfathers of AI," and they believe this and are extremely worried, touring the world begging for regulation to ban the creation of superintelligence, right alongside the author of the standard textbook on AI. Whistleblowers and former safety researchers at OpenAI also have a lot to say about this, and literally quit their jobs and in some cases risked the majority of their net worth in order to warn humanity about what they know. Literally most of the world's top experts agree with most of what Yudkowsky is saying about the problem. Many of them just still haven't caught up to all of the reasons why we already know that their brilliant solutions definitely won't work, because researchers have already been down those roads. I myself spent a few years ingesting all the arguments and empirical data myself, and they are just obviously correct. The problem is not a few cranks. If you rank order all of the people in the world by how credible they are on this topic, by any reasonable measure you want to choose, the people at the top of the list are more concerned than the people at the bottom of the list. And a study ("Why do Experts Disagree on Existential Risk and P(doom)? A Survey of AI Experts") showed that the AI researchers who are ignorant of basic AI Safety concepts are less worried. In other words, being more knowledgeable about this topic usually makes you more worried about it. That on its own should make you more worried!
youtube · AI Moral Status · 2025-10-30T23:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
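For orientation, here is a minimal sketch of how a coded record like the one above might be represented, assuming Python. The four dimension names come from the table; the value sets listed are only the labels visible on this page, not necessarily the full codebook, and the `CodedComment` class name and `validate` helper are hypothetical, not the pipeline's actual code.

```python
from dataclasses import dataclass
from datetime import datetime

# Label sets observed on this page only; the full codebook may define more values.
RESPONSIBILITY = {"none", "company", "ai_itself"}
REASONING = {"consequentialist", "mixed"}
POLICY = {"none", "regulate"}
EMOTION = {"fear", "resignation", "indifference", "outrage", "approval"}

@dataclass
class CodedComment:
    """One comment's coding result across the four dimensions."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        # Raise if any dimension carries a label outside the observed sets.
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")
```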
Raw LLM Response
[ {"id":"ytr_UgzCQ-iUBGiJbotDjLl4AaABAg.AOvG2nqLvwuAOwENDmBfQh","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgyydrhSRJE9cP4zynx4AaABAg.AOvFV0kEEVsAOvSebWOPx5","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytr_UgyDuEBwo9z5ChmnA9N4AaABAg.AOvF8iNnKddAOvOLsudQZA","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytr_UgyDuEBwo9z5ChmnA9N4AaABAg.AOvF8iNnKddAOwoZIRd89p","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytr_UgyDuEBwo9z5ChmnA9N4AaABAg.AOvF8iNnKddAOwqnD4BTpC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytr_UgyAT_ritWLYoa8ig-B4AaABAg.AOvEqiPULPiAOw0-2ulpDw","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytr_Ugz-P8VsXJjKSXwLjHN4AaABAg.AOvDxomYDG_AOvJ1C4UJFH","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgxSjbbmRsPdPz8DLRV4AaABAg.AOvDXVlfGR6AOvGjVKqzhz","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"}, {"id":"ytr_UgzrmdAGaBxHu3fE2od4AaABAg.AOvDU5fjZeZAOxh6dRK6iu","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytr_UgzmiJxClhPU4ivMYwp4AaABAg.AOvCsR6OqKfAOvsD1rGUcF","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"} ]