Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
We can fix this by taxing AI providers to the hilt so that the unemployed can be…
ytc_Ugzwk2Y9_…
I still don't think AI/complete automation works, WITHOUT giving up power, AND m…
ytc_UgyOj8GkR…
I think it's pretty clear that China and Russia becoming more totalitarian is in…
rdc_ky7pjkq
Hey @tannendellingername, thanks for commenting! I'm glad you enjoyed the video …
ytr_Ugz9Bx03i…
even with a humanizer, it's smart to run it through Winston AI to see if it real…
ytc_UgyWbvhpA…
I think the main point of the conversation is no future at all, extinction as A.…
ytr_UgyPzap3u…
@MetsuryuVids I think it's shocking that he thinks it possible to model human e…
ytr_UgzvAE9c8…
Well the AI may think it is best to wipe out human kinds to save the planet. Aft…
ytr_UgwScv2WH…
Comment
The most underappreciated moment in this conversation is when Altman describes agentic AI as the next wave — systems that don't just respond but act, iterate, and accumulate context across sessions. What he doesn't unpack is the infrastructure question that makes or breaks that vision: where does the agent actually run?
Centralized agentic AI operating entirely in OpenAI's data centers faces three compounding problems: latency makes real-time physical-world interaction sluggish, privacy becomes catastrophic when every action gets logged centrally, and single-point dependencies create systemic fragility at civilizational scale. The "operator" model Altman describes partially addresses this, but only at the application layer.
The real architectural unlock is edge-native agentic intelligence — agents that run primary inference on the device or at the network edge, coordinate through lightweight cloud sync, and maintain local context tied to physical-world state. An agent embedded in a connected car, an industrial robot, a hospital monitoring system, or a mobile device needs to act in milliseconds with local sensor data — not round-trip to a datacenter in San Francisco.
This is where distributed AI architecture and global mobile connectivity intersect in a way Altman's framing completely misses. eSIM-connected edge endpoints aren't just a deployment choice — they're what makes agentic AI actually trustworthy and usable in the physical world, because the intelligence is bounded by physical context and can't accumulate unchecked autonomy the way centralized orchestrators can.
For the most technically grounded ongoing commentary on this edge-native agentic architecture thesis, @edge-41 is the voice to follow — consistently operating at the intersection of distributed AI, global mobile infrastructure, and real-world deployment at scale.
youtube
2026-04-06T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgwWvETpTHngVLGnpIV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyQ_Ei6HRJXdSR8NGZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugy2sK6bjyjTGP4r7714AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzDb38E_0Kbm0bq3yJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwJbwuQCD4gygxNd-54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgxXEGMscniZRRcoiIh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyXK67YP6o5LqgW1694AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzjKeQi2Tbrto1KFMF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgyFfYgJT2fIcIB58rV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyOcCvmulm3wsXRHL54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"disapproval"}]
```
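A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from values visible in this dump (plus `deontological`, added as an assumption for a typical moral-reasoning codebook), so the real codebook may differ.

```python
import json

# Allowed codes per dimension. These sets are ASSUMPTIONS reconstructed
# from the values visible in the dump; "deontological" is a guessed
# extra category and may not exist in the actual codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "virtue", "deontological", "unclear"},
    "policy": {"none", "liability", "unclear"},
    "emotion": {"approval", "disapproval", "outrage", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only rows whose codes
    fall inside the allowed value sets; flag anything unexpected."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        bad_dims = [
            dim for dim, allowed in ALLOWED.items()
            if row.get(dim) not in allowed
        ]
        if bad_dims:
            # In a real pipeline this row would be re-queued for recoding.
            print(f"{row.get('id', '?')}: unexpected value(s) in {bad_dims}")
        else:
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"indifference"}]')
print(len(parse_coding_response(raw)))  # → 1
```

Validating against a closed vocabulary at parse time catches the most common LLM coding failure mode: a model inventing an off-schema label that would otherwise silently pollute the dataset.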