Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by comment ID.
Random samples (truncated previews):

- ytc_UgwL6XwlC… — "The government should make it standard that all self driving cars have lidar and…"
- rdc_mdjnfdi — "I agree. Further, my thought is that if a conscious self is a combination of mem…"
- ytc_UgxsLiaPf… — "Parents seem to be blaming a chat bot for this tragedy. What did they do to sto…"
- ytc_Ugyvn5WhM… — "So why it says they dont use my chat for training data? Or am I just too inhinge…"
- ytc_UgyLTKkMW… — "Why people still calling AI "artists" artists? They are literally not artists wh…"
- rdc_mbwckak — "I think the far more dangerous path is the deep fakes that will come out of it. …"
- ytc_UgwUvAR_Z… — "You're right. But one thing about the shopping carts: in Europe, in many places,…"
- ytc_UgxyKHoYm… — "One of the best talks about AI that I have heard so far. We can learn more about…"
Comment
Wow … watching Steven shuffle papers and books at the end of this podcast clearly indicates he is intensely uncomfortable with the projection of the future that AI development is bringing us and dispirited by the loss of agency of living in a simulation He seemed to be desperate for a jailbreak to escape the implications but wasn’t offered one.
I have this offer of solace. If we are in a simulation run by a super intelligence then we don’t have to worry about AI safety … the singularity has already been hit. If Dr. Yampolskiy was actually certain about the simulation he would not be worried about AI safety either. It only makes sense to worry about AGI and ASI if we are not yet in a simulation.
I think the key group to convince first about AI safety are the central bankers starting with the 63 members of the Bank for International Settlements. If ASI gains control of the central banks it can enslave us all. The banks have to invent a way to completely isolate themselves and never implement AI past narrow AI tools. After that gets locked down the security can then be extended in a set of isolated rings (moats) to investment and commercial banks and to stock exchanges. Super AI cannot control us if it does not control financial flows. Think of nuclear armed missiles in a silo. It takes two humans turning physical keys simultaneously to launch. A super AI residing in a data centre server could spoof the order to launch but couldn’t physically turn the keys. So at least there is a physical breaker to get past.
youtube · AI Governance · 2025-09-06T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwp4MzVvc3bgfHwQvR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyVXY4mSceJIVOJsjF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwBxLn3POPv134YyGl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyZRB_hVTSFXPw8fKJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxEpdZFqgoWy2esJfV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyDxJ7NCu3orndJCnR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyw1t3u45vW2TdhKUN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyhokgwcTLHpE0AXIJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgzwsTGOlgooGa4VQqx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzcvJoUwxqRALFfwJx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
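The raw response above is a JSON array with one coded row per comment, keyed by `id` and carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how a lookup-by-comment-ID over such a response could work — using an abbreviated stand-in string for the full array, not the tool's actual implementation — might be:

```python
import json

# Abbreviated stand-in for a raw LLM response of the shape shown above.
raw = """[
  {"id": "ytc_Ugwp4MzVvc3bgfHwQvR4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzwsTGOlgooGa4VQqx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

# Index the coded rows by comment ID for constant-time lookup.
codes = {row["id"]: row for row in json.loads(raw)}

code = codes["ytc_Ugwp4MzVvc3bgfHwQvR4AaABAg"]
print(code["responsibility"], code["policy"])  # government regulate
```

Indexing into a dict keyed by `id` mirrors the page's "look up by comment ID" behavior: each coded comment maps to exactly one row of dimension values.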