Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “AI costs a lot — from employee salaries to building the necessary infrastructure…” (ytc_UgyitONbl…)
- “oh, ai learns by scraping websites, so no one's screenshotting anything, the pro…” (ytr_UgxC2R-VX…)
- “I would argue the banana taped to the wall took more effort than any AI art does…” (ytc_Ugx26Wy2E…)
- “So when the chatbot tells me I can rob the bank I’m legally protected by law bec…” (ytc_UgzfwiPJJ…)
- “And some say robot employees don’t go for strikes. This one did. P.S. Yes, I k…” (ytc_UgwcLFq-L…)
- “I agree that programmers wont be disappearing cause of AI, but I also agree to t…” (ytc_Ugy1mQaQX…)
- “LETS MAKE ONE THING CLEAR; I you use Gmail or Amazon services (many websites are…” (ytc_UgwQb3Djn…)
- “The problem is that everyone are close to their money , now they can generate pr…” (ytc_Ugx0HmTGL…)
Comment
At 59:00 he could be doing a better job of explaining Nick Bostrom's argument. He posits a trilemma with 3 possible scenarios, only one of which can be true. Scenario 1 is the scenario we've believed to be true until now, creating human level AI is not possible. If scenario 1 is true, we are not in a simulation. Scenario 2, human level AI is possible, and we're the first to invent it. If scenario 2 is true, we are not in a simulation. Scenario 3, human level AI is possible, and we are not the first to create it. If scenario 3 is true, we are likely in a simulation. The reason scenario 3 is likely is because of diminishing cost of technology. If the cost of creating a human level simulation approaches zero, there will be effectively infinite numbers of human level simulations running from this level of reality alone. Our odds of being base reality is 1 in that number. Those simulations can create countless simulations of their own, so if creating human level simulation is technically possible, the odds of this being base reality drop to basically zero.
youtube · AI Governance · 2025-09-04T23:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
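
The coding result above can also be read as a small record type. The following is a minimal sketch, assuming the table's dimensions map one-to-one onto the keys in the raw LLM response below; the category values in the comments are only those observed in this sample, not necessarily the full codebook, and `coded_at` does not appear in the model output, so it is assumed to be stamped when the result is stored.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, as shown in the Coding Result table."""
    id: str              # comment ID, e.g. "ytc_Ugzk0Vs9z7gOg-GAafN4AaABAg"
    responsibility: str  # observed: "company", "user", "ai_itself", "none", "unclear"
    reasoning: str       # observed: "consequentialist", "deontological", "mixed"
    policy: str          # observed: "regulate", "ban", "liability", "none", "unclear"
    emotion: str         # observed: "outrage", "fear", "indifference", "mixed"
    coded_at: str        # ISO-8601 timestamp; assumed to be added when the coding is stored
```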
Raw LLM Response
[
{"id":"ytc_Ugzk0Vs9z7gOg-GAafN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwf4s7pPDddFIMmL-R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz1vUyEyBa8xmiWvkl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxY_rMWkv0e551Q_AB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugy6I64woBuYEUpQNBd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxKPOcRmd88QOthGCh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwvBHZVTTGYPwz5Go54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxjWE36SJxpAhS2TQ14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyS4hQ7lcrkVB9Ja9l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzK-EYPFraj877Jt_54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
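
The by-ID lookup described at the top of this page could, in principle, work by parsing this raw output and indexing it on the `id` field. Below is a minimal sketch under that assumption; the function name and the `raw` variable are hypothetical, and it assumes the model output is a plain JSON array like the one above.

```python
import json

def index_by_comment_id(raw_response: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coding records)
    and index the records by comment ID for direct lookup."""
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}

# Usage, assuming the raw response above is held in a string `raw`:
# coded = index_by_comment_id(raw)
# coded["ytc_UgxY_rMWkv0e551Q_AB4AaABAg"]["policy"]   # -> "liability"
```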