Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples — click to inspect

- "There was a time when teachers were allowed the freedom to make learning interes…" (ytc_UgzB4dsL7…)
- "Oh, I agree with you. But the current boomer talking point seems to be "well the…" (rdc_mvbt5ap)
- "People are relying on AI TOO MUCH!! AI and surgery?!?!?! I never saw that coming…" (ytc_UgyBpenOT…)
- "That will happen if we let it happen IA is just ai we smarter than that 💩 😅…" (ytc_UgwhJcmJ4…)
- "The threat isn't just to the risk of mundane intellectual labor. Consider a phys…" (ytc_Ugzeumjd0…)
- "When I triped shrooms, it made me aware of my body. And it's I and only I that …" (ytc_UgzByUwBi…)
- "funny he actually talks about this , and misses the point completely / i dont care…" (ytr_Ugx8xqoy7…)
- "That is very true. As an autistic, I rely on Google AI to achieve some of the fe…" (ytc_Ugyn_YpsL…)
Comment
I'm admittedly only 40 minutes in so far, but to me the main issue is that Yudkowsky is making an argument by analogy to other systems (and then essentially saying "Now imagine that times a million"), and Ezra is saying, "Okay, fine, but how are you imagining this will actually happen in the specific case of AI?" and I think Yudkowsky hasn't done a good job, at least here, of illustrating that he has a theory of the case on how this actually plays out. That's not to say he doesn't have one, he might, but Ezra's primary goal with this conversation is clearly to understand whether Yudkowsky's alarm is born from his knowing information that Ezra isn't privy to or having thought through some argument that Ezra hasn't considered, or whether it is a little disproportionate and irrational. Given that, not having a theory of the case makes his argument fairly unconvincing. Now, in Yudkowsky's defense, being asked to predict exactly how a technology we've never experienced before brings about an event that's never happened before is a tough brief, and maybe argument by analogy is actually the best you can do, but I think he could be a little more intellectually rigorous and honest about communicating that. I think the "AI in Context" YouTube channel's "We're Not Ready for Superintelligence" video does a much better job of communicating the kind of argument Ezra is clearly looking for than Yudkowsky has done so far in this conversation.
youtube · AI Governance · 2025-10-15T22:4… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytr_UgzS2oTeKUjkY3gUReB4AaABAg.AOInXuezjhDAOJNQrNspj9","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugzof8gms3CEewnikQx4AaABAg.AOImdY8bJNqAOJNwdgKkmz","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugzof8gms3CEewnikQx4AaABAg.AOImdY8bJNqAOJQSUyiZQM","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugzof8gms3CEewnikQx4AaABAg.AOImdY8bJNqAOJSQzZ781-","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugzl3OaI9Eh4nLbYz7J4AaABAg.AOImH03Cw8eAOJiLmbTXrR","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugw4fegkhpEZ3ufwAPJ4AaABAg.AOIi5wSPIm2AOJ-oAbTVDr","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_Ugw4fegkhpEZ3ufwAPJ4AaABAg.AOIi5wSPIm2AOLAUhf7jsT","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugwwrc0koYUHVT_Zv414AaABAg.AOIgrwSqAUZAOL6X4BRJ6x","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgwzM1T9Z4ORS-gHOc54AaABAg.AOIgfAlhL3gAOJ3m7s-sJO","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgySnjkGV4TD_4SA1RV4AaABAg.AOIg-5q8_vuAOJL1_4DEHp","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
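The "look up by comment ID" operation the tool exposes can be sketched as a small script: parse the raw JSON array returned by the model and find the record whose `id` matches. This is a minimal sketch, not the tool's actual implementation; the `lookup_by_id` function name is hypothetical, and the two sample records are taken verbatim from the raw response above.

```python
import json

# Two records copied from the raw LLM response shown above.
raw_response = """[
  {"id":"ytr_UgzS2oTeKUjkY3gUReB4AaABAg.AOInXuezjhDAOJNQrNspj9","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgySnjkGV4TD_4SA1RV4AaABAg.AOIg-5q8_vuAOJL1_4DEHp","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

def lookup_by_id(response_text: str, comment_id: str):
    """Parse a raw JSON array of coded comments and return the record
    whose "id" field matches comment_id, or None if it is absent."""
    records = json.loads(response_text)
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None

record = lookup_by_id(raw_response, "ytr_UgySnjkGV4TD_4SA1RV4AaABAg.AOIg-5q8_vuAOJL1_4DEHp")
print(record["emotion"])  # fear
```

A real backend would likely index records by ID in a dict (or a database) rather than scanning a list, but for a batch of ten responses a linear scan is perfectly adequate.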