Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- “Famine is going to happen when the Russians hack your computers and stop the aut…” (ytc_UgwhXCWin…)
- “Humanity seriously has no idea how we can create scenarios. We have been using b…” (ytc_Ugx8n_ugJ…)
- “The "Level 3" title is a bit of a marketing trick. While Mercedes takes liabilit…” (ytc_UgyxB3riy…)
- “@ArnoldSig I’m confused. What does that have to do with AI and horses? I’m sayi…” (ytr_UgzVhqEV0…)
- “We should surrender now to avoid warring with our betters. We claim guardianship…” (ytc_Ugy_5bgT6…)
- “I’ll still take Sonnet (and Opus for when I really need it) over GPT5. Also anyo…” (rdc_n7h2a6i)
- “The writers and artists could more easily come together and replace the studios …” (ytc_UgwmiXdCG…)
- “I refuse to refer to anything regurgitated out of an ai as "art". Ai is just a g…” (ytc_UgwINLxmB…)
Comment
OpenClaw as a Wake-Up Call — Not Because It’s New, But Because We Finally Looked at the Consequences
As a double alumnus of the Singularity University, I’ve been listening to these podcasts for a long time — through countless episodes marked by awe, acceleration, and justified excitement about what technology can do. That’s exactly why this episode felt different.

OpenClaw wasn’t just another impressive technical milestone. It was a wake-up call — not because the technology suddenly crossed a magical threshold, but because the conversation finally did. What struck me most was not the emergence of a 24/7 autonomous agent — but that it took something this visceral, this embodied, this unsettling before the discussion shifted decisively toward consequences, responsibility, security, and governance. In that sense, OpenClaw didn’t change reality.
It changed attention. For years, the dominant tone across the tech ecosystem (and yes, across many podcast episodes) has been:
Look what’s possible.
Look how fast this is moving.
Look how beautiful the exponential curve is.
I’ve lived inside that world too.
I’ve worked with Salim Ismail for years, and during the work on Exponential Organizations 2.0, I repeatedly tried to surface a harder, less comfortable layer: exponential growth amplified by AI doesn’t just scale capability — it scales externalities, risks, and irreversibility. Not because the technology is evil, but because speed without agency discipline is dangerous by definition.

That’s why I was genuinely relieved — even grateful — to hear Salim force the panel into territory that too often gets postponed: security before scale, responsibility before autonomy, and consequences before celebration.
This episode felt like a long-overdue bottom-turn moment.
Not the crest of the wave — but the last point where direction can still be chosen.
OpenClaw didn’t suddenly make these issues real.
It made them undeniable.
And perhaps that’s the real signal here:
we’re no longer debating whether these systems will act with agency —
we’re confronting the fact that we’ve been architecting agency without fully owning its implications.
So yes — welcome to the harder conversation.
Not about whether this is “AGI” or not.
But about whether we’re mature enough, collectively, to ride what we’ve unleashed.
This episode matters — not because it predicts the future, but because it finally treats responsibility as part of the present. And to be clear: this is not a criticism of the podcasts’ positive sentiment; it is a call for critical thinking, in the knowledge that there is a fine line between negativism and critical thinking. That’s a shift worth marking.
Thank you!
youtube
2026-02-08T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyKkWf5JwbY8by10IR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwRy7yuUv8qHKbnnbd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9vDqNanfz6XLZfBh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwQmtw1LdzFy0rFHvF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy_WSWL7ValkluDpOZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx34_EFTBtj1I6nKat4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwJWOGw1sY5z8E-JyV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyD-f85Eek2aMLGfQ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzweALOKJjieABoGYJ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz9vRbpYVSThK8e8YR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
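Before loading a raw response like the one above into the coding results, it is worth validating each record against the closed vocabulary of the coding scheme. A minimal sketch, assuming the category sets are exactly those visible in the table and JSON above (the real codebook may allow more values):

```python
import json

# Allowed values per coding dimension, inferred from the sample output above.
# This is an assumption; the actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "government", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the schema.

    Raises ValueError on a missing id or an out-of-vocabulary value, so a
    malformed batch fails loudly instead of being silently loaded.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records
```

A record that uses a value outside the vocabulary (for example a misspelled emotion) is rejected with the offending comment ID in the error message, which makes it easy to re-prompt only the failed batch.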