Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

| ID | Comment preview |
|---|---|
| ytr_UgwO4x22-… | @sfmzaI loved them! it came out better than I expected. For some songs, I had to… |
| ytr_UgziSrHRd… | @RagdoLL86 Okay, that's a point - but my opinion on this is still largely uncha… |
| ytc_UgxHRa2lU… | The amount of AI adoptables that COST MONEY to have etsy is littered with them… |
| ytc_UgyKHWjaH… | The judge issued an incorrect verdict (on 2:50), equating human learning and the… |
| rdc_nudfh92 | Well, there have been renowned scientists, including Stephen Hawking dedicating … |
| ytc_UgxCFCH_d… | AI thinks, but when you are selective with what it can use to learn from or deli… |
| ytr_UgySm8cgI… | Thank you—that means a lot. Curiosity is often just slowing down enough to quest… |
| ytc_UgzyP1-UX… | It's a pleasure listening to this eloquent woman speak, but my heart breaks for … |
Comment
I have read this book and I think it does a very decent job explaining its core arguments, but I don't think this interview makes the book come off as reasonable. Despite Hank bringing it up multiple times, they never actually get into why it seems probable that superintelligence will be developed, or why aligning it is so hard.
I suspect most people who watch this interview (without already being read up on AI alignment discourse) will come out thinking it's an exaggerated conversation about scifi concepts. I hope I am wrong.
youtube · AI Moral Status · 2025-11-02T19:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
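Each coding assigns the comment a value on four categorical dimensions. A minimal sketch of the record type, assuming the value sets are exactly those observed in the raw responses below; the actual codebook may define more values, and these type names are illustrative, not the pipeline's real interface:

```python
from dataclasses import dataclass
from typing import Literal

# Value sets inferred from the sample responses shown in this section;
# they are an assumption, not the authoritative codebook.
Responsibility = Literal["ai_itself", "user", "distributed", "none", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "mixed", "unclear"]
Policy = Literal["regulate", "industry_self", "liability", "none", "unclear"]
Emotion = Literal["approval", "fear", "resignation", "indifference", "mixed"]

@dataclass
class Coding:
    id: str  # comment ID, e.g. "ytc_…" (YouTube comment) or "rdc_…" (Reddit)
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```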
Raw LLM Response
```json
[
{"id":"ytc_UgyjZyTJQdV33bw0vop4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwCMEtyTtZwynwkXrV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxh3riF0-4UK4etQ0d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugw7UPSqMIu1xFiIUSl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzrp8HbL5oyccS7tDh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyJZ5WYBWtWhye6KXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw7_T-EMPRxzTRgF_N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy2ds2xE56wcAnbRrZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwRM1UtUh06iVVjG654AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw22W8hz_3dOr8fC7h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
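The model codes comments in batches, returning one JSON record per comment ID, so looking up a single comment's coding is a scan of the parsed array. A minimal sketch, assuming the array schema shown above; `lookup_coding` and the inline sample data are illustrative, not the dashboard's actual interface:

```python
import json


def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Return the coding record for one comment ID from a batch response."""
    records = json.loads(raw_response)  # response body is a JSON array
    return next((r for r in records if r["id"] == comment_id), None)


# Single-record example using values from the batch shown above.
raw = (
    '[{"id":"ytc_Ugzrp8HbL5oyccS7tDh4AaABAg","responsibility":"unclear",'
    '"reasoning":"mixed","policy":"unclear","emotion":"mixed"}]'
)
print(lookup_coding(raw, "ytc_Ugzrp8HbL5oyccS7tDh4AaABAg"))
# {'id': 'ytc_Ugzrp8HbL5oyccS7tDh4AaABAg', 'responsibility': 'unclear', ...}
```

Note that the fifth record in the batch matches the Coding Result table above (unclear / mixed / unclear / mixed), which is how the raw response can be cross-checked against the stored coding.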