Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Love the ambition and optimism here—we need big thinking for big challenges. One thought: what if the *real* moonshot is achieving **wisdom at scale** alongside **capability at scale**?

Imagine:
- AI that helps us distinguish "can we?" from "should we?"
- Technology powerful enough to create abundance WITHOUT requiring exponential resource consumption
- Civilization that thrives by going *deeper* (wisdom, understanding, harmony) rather than just *bigger* (GDP, territory, compute)

That might actually be **harder** than Mars colonies or AGI—and possibly more important for whether we're still here in 1000 years. Maybe we need both visions:
- **Moonshots** → Push boundaries of what's possible
- **Groundshots** → Build foundations that last (thermodynamically sustainable, ecologically integrated, psychologically healthy)

Would be amazing to hear a Moonshots episode exploring questions like:
- "What would sustainable superintelligence look like?"
- "How can AI amplify wisdom, not just intelligence?"
- "What's the endgame beyond 'more'?"

Either way, grateful for conversations that get people thinking about humanity's trajectory. Keep pushing the envelope—and maybe consider what envelope we're pushing *toward*.
youtube 2026-02-10T02:1…
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
  {"id":"ytc_UgwNGHF7IeyibrfrKgZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugy5YaIDAq99MVIbiZJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxH1McRiTPkle-6pnh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgycsnIBCSxy1twFQyV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugw4PI2WXzH3nrgOJl54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw3_8Tozg9JIMDsxyl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxVljjPwQuccX8TIYx4AaABAg","responsibility":"unclear","reasoning":"contractualist","policy":"regulate","emotion":"unclear"},
  {"id":"ytc_UgwuJkpn_ehh7nmlhdx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyptgY-HDXvAoyDyjx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxG9nTfD6R_UhSM4nd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
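A minimal sketch of how a raw response like the one above could be parsed and indexed by comment id so a single comment's codes can be looked up. This assumes the response is valid JSON with the field names shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the variable names are illustrative, not part of any actual pipeline.

```python
import json

# A truncated sample of the raw LLM response shown above (field names taken
# from the actual output; only one record is reproduced here for brevity).
raw_response = '''[
  {"id": "ytc_Ugy5YaIDAq99MVIbiZJ4AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "approval"}
]'''

# Index each coded record by its comment id for O(1) lookup.
codes = {record["id"]: record for record in json.loads(raw_response)}

# Retrieve the codes for the comment displayed on this page.
row = codes["ytc_Ugy5YaIDAq99MVIbiZJ4AaABAg"]
print(row["emotion"])  # → approval
print(row["reasoning"])  # → mixed
```

In practice the raw string would come from the LLM API call rather than a literal, and malformed JSON (a common failure mode with model output) would need a `try/except json.JSONDecodeError` guard around the parse.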