Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I hope we give Will Smith a free I robot so it can slap the heck out of will smi…" (ytc_UgweLtfvW…)
- "“Umm actually there ai prompt engineers,” 🤓 , like no they actually call them se…" (ytc_UgyAK7sFC…)
- "Good, i hope everyone starts doing it. I can accept the idea of AI being a tool …" (ytc_UgzzuIWrC…)
- "I have used ChatGPT to help with extra information on a medical issue. But you …" (ytc_Ugzmes3AZ…)
- "I’d say AI needs a kill switch, every model required by a global Internet law th…" (ytc_UgxiSkXSL…)
- "I really don't mind with ai images as long as they don't really make money, I ho…" (ytr_Ugwb0jUyx…)
- "My biggest gripe with AI is yes you can technically make anything. But if you ha…" (ytc_UgwJKcfyi…)
- "Dave's spitting a lot of facts here. Especially the pareto distribution comment.…" (ytc_UgzhvxTO9…)
Comment
I've never heard the idea that we are in a simulation address whether or not the "real" world that is doing the simulation is in a simulation or not. If there is some being or beings that are far more advanced than us running a simulation that is our universe and our being, then why are they not subject to the same line of reasoning? They would also have to acknowledge there may be some superior being running a simulation they occupy. This chain continues indefinitely because in every case the being(s) are more likely to be in a simulation than not. So what about the original beings creating the original simulation? I want to hear what people think of them.

In addition, what about qualia? We don't even really know what that is. Nor do we really know what consciousness is. So any assertion that qualia, consciousness, sentience, etc. exists in our AI machines is really just an assumption based on no understanding of these things. The conversations around this are made of arguments containing air. There is nothing to them. It's like a religion in and of itself - not the other way around.

If we come back to the foundations of science we have Occam's razor. Given we have two explanations for our reality - 1) we live in a simulation of higher order beings; 2) we are living in the most foundational reality - the latter is simpler and avoids the chain of simulations. Therefore, any intelligent and reasonable person would accept this unless they had more _scientific_ evidence. I do honor that people are entitled to their beliefs and religions, so it is fine for Roman to believe we live in a simulation, but he must acknowledge it is a belief - and that is where I would be critical of him.
youtube · AI Governance · 2025-12-04T18:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[{"id":"ytc_Ugx6ymqnQJHthcAiFKt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxTN5E8uvZj0ScyaI94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyOiy2PWZhc9awcpjV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwP_GVX8_nSqhTmDhp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxGu7pGzrDddN2UWnl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxLDBhQdSidj8Elz_l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwHVZMyEE7R3BIdYR14AaABAg","responsibility":"elite","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyf7DmTxmu75EmUODB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx2Ki_hPbGpda3OvV14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzIxsR3Lqk5Hb9Nwst4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}]
```
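The raw response is a JSON array with one coding record per comment, so "look up by comment ID" amounts to parsing that array and indexing the records by their `id` field. A minimal sketch, assuming the response text is valid JSON as shown above (the two sample records are copied from the response; the helper name `index_by_comment_id` is hypothetical):

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codes
# (records copied from the response above).
raw_response = """[
  {"id": "ytc_Ugx6ymqnQJHthcAiFKt4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwHVZMyEE7R3BIdYR14AaABAg", "responsibility": "elite",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw LLM response and key each coding record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgwHVZMyEE7R3BIdYR14AaABAg"]["emotion"])  # prints "outrage"
```

From such an index, the per-dimension values (responsibility, reasoning, policy, emotion) for any inspected comment can be read off directly, which is what the Coding Result table displays.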