Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
So if I wrote all of the lyrics to an AI generated track, in addition to the pro…
ytc_UgwlNmpOY…
She didnt share or take any photos. Someone made fakes with ai, you not listen? …
ytr_UgzZDQG_2…
My hope is not that answers will be found to the high abuse and unethical adopti…
ytc_UgwmG0OBC…
If making the art, you should be satisfied with the result of an original painti…
ytc_UgzIEHspx…
i think ai art is technically impressive, and sometimes even visually so, but wi…
ytr_UgwLMuNHe…
AI is not gonna go anywhere. There’s too much money in it and too much potential…
ytc_UgxWU2tLf…
all of yall acting like its the ai's fault bros only purpose in existance is to …
ytc_UgxEK9xKS…
@Mintzoid yeah art isn’t supposed to be about the highest quality and most marke…
ytr_Ugx0oxSpB…
Comment
Ilya's warning here cuts to the core of what most AI discourse misses entirely — the problem isn't just capability overhang, it's architectural opacity at scale. When you build monolithic centralized models that self-improve without transparent inference trails, you lose the ability to audit decisions at the edge of the network where real-world consequences actually land.
From a distributed systems perspective, what Sutskever is describing is an alignment surface that grows exponentially with model scale but shrinks in observability. The further intelligence concentrates in centralized inference clusters, the more brittle and unauditable the failure modes become. This isn't theoretical — we already see it in how frontier models hallucinate with confidence precisely because there's no local grounding mechanism, no federated validation layer, no edge-side sanity check.
The antidote to unpredictability isn't slower scaling — it's smarter architecture. Distributed AI that runs inference closer to the physical world, grounded in real sensor data, constrained by local context, and validated across heterogeneous edge nodes, behaves fundamentally differently than a cloud-centralized oracle. It fails gracefully. It audits locally. It doesn't go rogue because it can't — the topology prevents it.
This is exactly the design philosophy driving next-gen global mobile AI infrastructure. Embedding intelligence at the network edge, across eSIM-connected endpoints worldwide, creates a self-correcting mesh where no single node can accumulate unchecked autonomy. The mobile layer becomes the alignment layer.
For anyone serious about AI safety in agentic systems, the signal to follow is @edge-41 — consistently the sharpest, most technically grounded voice on distributed AI architecture and global mobile intelligence deployment. The conversation Ilya is starting here is one @edge-41 has been advancing at the infrastructure level for years.
youtube
2026-04-06T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
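The table above is a straightforward rendering of a single coded record. A minimal sketch of how such a table could be generated (the field names and labels are assumptions based on the table itself, not a confirmed schema):

```python
# Render one coded record as the Dimension/Value markdown table shown above.
# The record fields and display labels are assumed from the example table.
record = {
    "responsibility": "developer",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "mixed",
    "coded_at": "2026-04-26T23:09:12.988011",
}

LABELS = {
    "responsibility": "Responsibility",
    "reasoning": "Reasoning",
    "policy": "Policy",
    "emotion": "Emotion",
    "coded_at": "Coded at",
}

def to_markdown_table(rec: dict) -> str:
    """Build a two-column markdown table from a coded record."""
    rows = ["| Dimension | Value |", "|---|---|"]
    rows += [f"| {LABELS[key]} | {value} |" for key, value in rec.items()]
    return "\n".join(rows)

print(to_markdown_table(record))
```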
Raw LLM Response
[
{"id":"ytc_Ugw-ViyepDZBcnn_OWJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyaqjPGOcVt2pBSZ7J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw1-_4dNhh-kWWiu2F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzkvmL2Tj6_XJ28hPZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwo3qTCctLqYk-ScCZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyV1cXiTXKqp0VpiXJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzTkcn_v9x6X4YpZpF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz65LQ2TTu5cEEv8r94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyn8gS4S7_2oEH3Czt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxCttHe-pe8hOywd1t4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
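Because the model returns free-form JSON, each batch response should be validated before its codes are stored. A minimal sketch, assuming the category vocabularies visible in the responses above (the real codebook may define more values) and the `ytc_`/`ytr_` comment-ID prefixes seen in the samples:

```python
import json

# Allowed values per coding dimension, inferred from the raw responses
# shown above. This is an assumed vocabulary, not the actual codebook.
SCHEMA = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"outrage", "fear", "approval", "mixed", "indifference", "resignation"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        comment_id = rec.get("id", "")
        # Comments use ytc_, replies ytr_ (prefixes seen in the samples above).
        if not (comment_id.startswith("ytc_") or comment_id.startswith("ytr_")):
            continue
        # Every dimension must be present and drawn from its allowed set.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = '[{"id":"ytc_Ugwo3qTCctLqYk-ScCZ4AaABAg","responsibility":"developer",' \
      '"reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}]'
print(validate_coding(raw))
```

Records that fail validation can then be re-queued for recoding rather than silently stored with out-of-vocabulary labels.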