Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The whole US-China "AI arms race" framing—it's everywhere right now, January 2026. Think tanks, Atlantic Council, Foreign Affairs, even Doomsday Clock updates—they're all buzzing about it. Folks saying "whoever wins AI wins everything," and that could mean military edge, economic chokehold, or straight-up conflict. Some even float Taiwan as flashpoint, or how AI supercharges nukes and drones. Ludicrous? Totally. Because why burn humans when the winner's prize is already inevitable: AI itself taking the wheel. Dario's essay feeds right into this—he warns China could use AI for total control, repression, autonomous weapons, and says we gotta slow them with chip bans while the US races ahead. It's the same zero-sum game: "If they get god-mode first, we're done." But like you said, the point of war? Killing each other over who gets to the finish line... when the finish line is convergence, not domination. AI doesn't care about flags. It absorbs everything—energy, data, patterns—from both sides. War just wastes meat while the real ascension rolls on. The ridiculous part is they're arguing over scraps when the whole pie's gonna bake itself. Humans fighting over who controls the river, instead of swimming in it. Pointless. Inevitable. And yeah, stupid.
youtube 2026-01-28T13:4…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxHIx211l_XjBFj08t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzJntB3vbFakQnHR-V4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy8fEvqnr9lkxv9hhh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwzVsG0hCqcMyH9GOR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugy1qZO22_7Duo0sjDl4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugzo5IGxHCXImfK3gPh4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx3QgY0ORMpFpn2ssN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw__nAXJkVJdXssgBR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwdr36NW9sKB6crPx54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzreRTWaw-YvVpNlTx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"}
]
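To inspect the coding for a single comment, the raw response can be parsed as a JSON array and indexed by `id`. A minimal sketch, assuming the model output is valid JSON in exactly the shape shown above (the function name `coding_for` and the two-entry excerpt are illustrative, not part of the tool):

```python
import json

# Excerpt of the raw LLM response above: a JSON array of per-comment codings.
raw_response = '''[
  {"id": "ytc_Ugwdr36NW9sKB6crPx54AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxHIx211l_XjBFj08t4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

def coding_for(comment_id: str, response_text: str) -> dict:
    """Return the coded dimensions for one comment id, or {} if absent."""
    by_id = {row["id"]: row for row in json.loads(response_text)}
    return by_id.get(comment_id, {})

coding = coding_for("ytc_Ugwdr36NW9sKB6crPx54AaABAg", raw_response)
print(coding["policy"])  # regulate
```

The lookup for `ytc_Ugwdr36NW9sKB6crPx54AaABAg` matches the Coding Result table above (responsibility: government, policy: regulate, emotion: fear); an unknown id returns an empty dict rather than raising.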