Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytr_UgwR0robs…`: Ohhhh so cute 😂 you are now blaming their education instead of blaming Artificia…
- `ytc_UgzksOlG5…`: this channel's personality (the person who gestures on screen from time to time)…
- `ytr_UgyeWPfEZ…`: @ well, I’ll just quote OpenAI chief executive Sam Altman, in his testimony befo…
- `ytc_UgxSX_G1l…`: 6:52 when chatgpt said "when I, when I said I was excited" It felt more human th…
- `ytr_Ugxl-x21w…`: Agree. AI gives fast prototypes, but maintaining code and scaling up without bre…
- `ytc_UgwTqlB5Q…`: I draw sometimes and I myself am somewhat disabled by medical definition but I d…
- `ytc_UgxyhyYG2…`: there's actually quite a few ppl that just hate ai art in general, including per…
- `ytc_Ugx2BMRvO…`: EU is in another century- (to be clear, from the past…not the future)🤪 Deepseek …
Comment
I think a key disjunct between Ezra and Eliezer is that Ezra is underestimating the speed difference between the two types of consciousness. He's essentially saying, "The future relationship between humans and AIs will be like a father and son walking down a path together. The father has taught the son to follow him and check in periodically so that they don't get separated on their journey. As they walk, the two of them—together—will ensure they are both walking in the same direction." The problem is that humans will grow, progress, and think VERY slowly compared to an AI superintelligence.
The metaphor shouldn't be a father and son walking side by side, and planning out together where they are going. The metaphor should be a human and a fighter jet, side by side, are trying to coordinate a shared direction to travel in. The human looks up at the fighter jet and says, "Ok, I want you to go that way. *points* I think I know the right direction, but I'm not sure. So check in with me as you speed on ahead, and if I tell you to change direction or come back, you need to listen." The problem is that the human is limited by their biology (the speed that signals travel through neurons, the imperfection in the optical lens, the imprecision of muscle fibers). As a result, the human A) does not have the awareness or precision to safely aim the jet just by pointing, and B) by the time the human can see that the jet is about to crash into something, it's too late to stop it.
When the number of cognitive steps, the size of the knowledge base, and the reaction time of AI superintelligence become multiple orders of magnitude greater than ours, there is just NO WAY for us to control the direction with a high degree of precision, because even a TINY miscalculation, at the speed and power that AI has the potential to harness, will result in destruction. The gulf between us and AI superintelligence is difficult to truly conceptualize, which is why I think Ezra misses the scope of the problem Eliezer is warning us about.
youtube · AI Governance · 2025-10-15T21:3… · ♥ 27
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx0eO84iCVdGa-cKip4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz8PlCBzNjvAigLxFh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxlRue2H7T6_ZB_vUJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyt3hv5O8ERb9YLSoB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyfgxGpRqKXk1E697R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxV6pE8mgjX3NxCgAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzOAM377rC3BN7EAil4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxnVyar3ZKhY8tQS2B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxXCp0x5W-aQeQ8lBp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzdO69m5g0_OjZkzkd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
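
The coded dimensions shown in the table above are read out of this JSON array by comment ID. A minimal sketch of that look-up step, assuming only the array shape shown here (the `lookup` helper and the two-entry sample string are illustrative, not part of the actual pipeline):

```python
import json

# Sample raw LLM response: a JSON array of per-comment codings,
# matching the shape of the "Raw LLM Response" block above.
raw = '''[
  {"id": "ytc_Ugx0eO84iCVdGa-cKip4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzdO69m5g0_OjZkzkd4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "liability", "emotion": "outrage"}
]'''

def lookup(raw_response: str, comment_id: str):
    """Return the coding dict for one comment ID, or None if absent."""
    for coding in json.loads(raw_response):
        if coding.get("id") == comment_id:
            return coding
    return None

coding = lookup(raw, "ytc_Ugx0eO84iCVdGa-cKip4AaABAg")
print(coding["emotion"])  # -> indifference
```

A guard for `None` (an ID missing from the response) is worth keeping, since truncated or malformed model output can silently drop entries.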