Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I was just gonna comment this.. he’s so uncreative that he can’t even write out …
ytr_UgzOTC2_C…
So basically, lump eople making arguments under some vague "tech bro" label (def…
ytr_UgxAd_UAy…
Trump has declared no regulations on AI folks. It’s a free for all (if your a te…
ytc_UgwosrLPV…
Well for me on my own opinion ChatGPT is a powerful extension of Google where ev…
ytc_UgwcALLSU…
@onionkatze4777 Destroy the looms and become a luddite then? There are plenty o…
ytr_UgwtolAWV…
My car drove me from home to work then to the store, and home, sounds pretty muc…
ytr_UgwXxrgTb…
because in these professions, AI hasnt prooved their worth
as soon as surgeon, o…
ytc_Ugxc36p__…
I think I’ll get Ai to rewrite the Works of Dickens, Shakespeare and that fairy …
ytc_UgyLC9iou…
Comment
I am a big fan of Yudkowsky but thought this interview was quite bad. The interview should have either made it clear where Ezra and Yudkowsky disagree or it should have allowed Yudkowsky to make his strongest case. I feel that neither of those were achieved. I think the reason is that Ezra doesn't understand where the disagreement is and asks the wrong questions. An example is that he asks about alignment approaches like making the AI obedient/corrigible or making it chill. I think very few people agree with Yudkowsky on most things but disagree on these specific approaches being the solution. Another example is asking Yudkowsky to describe Reinforcement Learning. Then Ezra wanted to move off the evolution analogy even though it had become clear that he didn't understand the argument (40:54-44:34). I think it would have been better to focus on the more fundamental questions like how Yudkowsky views intelligence, goal optimization, moral realism and the analogies to chimpanzees and natural selection as I believe this is where the disagreements lie.
youtube
AI Governance
2025-10-15T13:5…
♥ 18
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwndzV0b_pd872B6MN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwuUq5LPr_Cy97_pHJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx5KNSzBlJHOuDjvSR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx5BJ1c2qU5Enjt9sd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgycAg8VmvNgc8Y73X54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugzof8gms3CEewnikQx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzzZ9Q9QQSdcHk3Ycx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzl3OaI9Eh4nLbYz7J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzpFbTHOANObjowEVh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy0aa2SaObu4P3est54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
```
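The raw response is a JSON array with one object per coded comment, each carrying the four dimensions shown in the result table above (responsibility, reasoning, policy, emotion) plus the comment `id`. A minimal sketch of how such a response can be parsed and indexed for lookup by comment ID; the field names come from the response above, but the function and variable names here are illustrative, not the tool's actual API:

```python
import json

# Dimensions coded per comment, matching the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Abbreviated example payload in the same shape as the raw response above.
raw_response = """
[
  {"id": "ytc_UgwndzV0b_pd872B6MN4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugzl3OaI9Eh4nLbYz7J4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
"""

def index_codings(raw: str) -> dict[str, dict[str, str]]:
    """Map comment ID -> coded dimensions, defaulting missing fields to 'unclear'."""
    rows = json.loads(raw)
    return {
        row["id"]: {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
        for row in rows
    }

codings = index_codings(raw_response)
print(codings["ytc_Ugzl3OaI9Eh4nLbYz7J4AaABAg"]["emotion"])  # -> mixed
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: one parse of the response, then constant-time retrieval per comment.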