Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This was a great conversation! Thank you so much for interviewing Nate Soares. He is a very clear thinker on the topic of AI safety. If you're interested in speaking to another AI Safety communicator, my top two recommendations are Rob Miles (the absolute legend) and Connor Leahy (a top tier communicator and a great down-to-earth guy). My thoughts on the end: Is AI really intelligent? Can submarines really swim? I think sentience and consciousness are red herrings. It is capability that we should be concerned about. If knowing that it is definitely conscious or if it is definitely not conscious does not change your predictions for what it will do, then consciousness is not a useful phenomenon to invoke. The "AI As Normal Technology" position largely isn't taken seriously within the field. There could certainly be some selection effects at play there, but I also just don't find the arguments convincing. As a counter, I recommend the AI Futures Project blog post "AI As Profoundly Abnormal Technology". In general, it is a bare fact of our current reality that the majority of leading AI researchers agree that there is a significant chance of literal human extinction from artificial superintelligence in the next several years, maybe up to two decades. There is no precedent for this, and for many reasons there is no plausible explanation to account for this other than that they actually believe what they are saying. The people who know the technology the best tend to be more worried about it. Lastly: join PauseAI!
youtube AI Moral Status 2025-10-31T02:1… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwcMdHiPdTgFCEJ9yV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzCo8EE_W1v1yLP2aZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxQ6X2p5ivXXolDgdx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxDe4qf8L9tMvdtq7F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw9eKoKBqUxZEUdiM14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzWDTtG2hvhTEvjf8p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwcsfS4IKgHj9eDrJh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy6hKPvTYO5uTLhEUp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyWjtc_wDUeskBZwYZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyJGki2dmO2trJ0wWZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
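A raw response like the one above can be parsed and validated before its codes are stored. The following is a minimal sketch, assuming the four dimensions shown in the coding-result table; the allowed-value sets are inferred only from the values visible in this log, not from the project's actual codebook:

```python
import json

# Allowed values per dimension, inferred from the responses in this log.
# ASSUMPTION: the real codebook may permit additional values.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "user"},
    "reasoning": {"mixed", "consequentialist", "virtue"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM JSON array into {comment_id: {dimension: value}}."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        codes = {dim: row[dim] for dim in ALLOWED}  # KeyError if a dimension is missing
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

# One entry from the response above, as a usage example.
raw = ('[{"id":"ytc_UgxDe4qf8L9tMvdtq7F4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"approval"}]')
codes = parse_codes(raw)
print(codes["ytc_UgxDe4qf8L9tMvdtq7F4AaABAg"]["emotion"])  # approval
```

Validating at parse time surfaces off-schema values (a common failure mode of LLM coders) immediately, rather than letting them silently enter the coded dataset.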