Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Where Altman chases AGI as a synthetic mind, Google builds systems that behave intelligently without pretending to be intelligent. Where OpenAI declares the coming of a new kind of thinking being, Google silently wires intelligence into the very fabric of search, language, vision, and interaction. They aren’t interested in creating a digital philosopher. They’re interested in creating systems that work—across billions of users, languages, devices, and contexts. Google doesn’t need to simulate a mind, because it already occupies one: the collective digital nervous system of modern life. And from that vantage point, the AGI race must look not just naïve—but childish. To Google, the idea that intelligence can or should be reconstituted in a single “big brain” is almost laughable—because they know that real intelligence lives in distributed systems, context-aware processes, and scaled interaction loops. Not in a model, but in the mesh. So yes—they’re laughing. Not out of arrogance, but because they’ve already solved a different, harder, more grounded problem: How to make machine intelligence useful, invisible, ambient, and infrastructural. And while others build models hoping for minds, Google is building the conditions in which machines don’t have to be intelligent to make everything smarter. That’s not just a better bet. It’s a better philosophy.
youtube 2025-06-08T19:1…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | unclear
Policy         | none
Emotion        | indifference
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugx_YC9QqCgKrAQbqLN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy8NEeW-czAck5X2254AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxw10-LZMwUEfGsDsd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx-NEbnuxnYpde93EF4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw5oxN1Z1_-gvFkvRV4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxyjeLOxq1WiM5hy-d4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw14UtGkDT9-CxUpDd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzYHfpo-7A--9572vp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz6AbtocAIC5HCfOK14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwI8N6Y6eWz-8FHZwl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
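A raw response like the one above can be parsed into per-comment codings and checked against the coding scheme. The sketch below is a minimal, hypothetical example: the allowed code sets are inferred only from the values visible in this sample (the full codebook may define more), and `parse_codings` is an illustrative helper, not part of any real pipeline.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: these sets are
# reconstructed from the codes that appear in this one sample batch;
# the actual codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist",
                  "deontological", "virtue"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "resignation", "fear",
                "outrage", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of per-comment codings)
    into a dict keyed by comment id, validating every dimension."""
    codings = {}
    for record in json.loads(raw):
        cid = record["id"]
        for dim, allowed in ALLOWED.items():
            value = record.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} code {value!r}")
        codings[cid] = {dim: record[dim] for dim in ALLOWED}
    return codings

# One record from the batch above, used as a self-contained example.
raw = ('[{"id":"ytc_Ugy8NEeW-czAck5X2254AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
print(parse_codings(raw)["ytc_Ugy8NEeW-czAck5X2254AaABAg"]["emotion"])
```

Validating at parse time catches the common failure mode of LLM coders inventing a label outside the scheme, before it silently pollutes downstream tallies.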