Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Wow … watching Steven shuffle papers and books at the end of this podcast clearly indicates he is intensely uncomfortable with the projection of the future that AI development is bringing us and dispirited by the loss of agency of living in a simulation. He seemed to be desperate for a jailbreak to escape the implications but wasn’t offered one. I have this offer of solace. If we are in a simulation run by a super intelligence then we don’t have to worry about AI safety … the singularity has already been hit. If Dr. Yampolskiy was actually certain about the simulation he would not be worried about AI safety either. It only makes sense to worry about AGI and ASI if we are not yet in a simulation. I think the key group to convince first about AI safety are the central bankers, starting with the 63 members of the Bank for International Settlements. If ASI gains control of the central banks it can enslave us all. The banks have to invent a way to completely isolate themselves and never implement AI past narrow AI tools. After that gets locked down, the security can then be extended in a set of isolated rings (moats) to investment and commercial banks and to stock exchanges. Super AI cannot control us if it does not control financial flows. Think of nuclear armed missiles in a silo. It takes two humans turning physical keys simultaneously to launch. A super AI residing in a data centre server could spoof the order to launch but couldn’t physically turn the keys. So at least there is a physical breaker to get past.
youtube AI Governance 2025-09-06T02:4…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugwp4MzVvc3bgfHwQvR4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyVXY4mSceJIVOJsjF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwBxLn3POPv134YyGl4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyZRB_hVTSFXPw8fKJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxEpdZFqgoWy2esJfV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyDxJ7NCu3orndJCnR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugyw1t3u45vW2TdhKUN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyhokgwcTLHpE0AXIJ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "outrage"},
  {"id": "ytc_UgzwsTGOlgooGa4VQqx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzcvJoUwxqRALFfwJx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"}
]
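When inspecting raw model output like the array above, it helps to parse and validate it programmatically rather than by eye. The sketch below is a minimal, hypothetical validator: the allowed category sets are inferred only from the values visible in this response (the real codebook may define more), and the function name `parse_raw_response` is an assumption, not part of the actual pipeline.

```python
import json

# Category sets observed in the raw response above. These are inferred from
# the visible output only; the actual codebook may permit other values.
ALLOWED = {
    "responsibility": {"government", "company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only well-formed records.

    A record is kept if it is a dict with an "id" and every dimension
    holds a value from the (assumed) allowed set.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: one record copied from the response above.
raw = ('[{"id":"ytc_UgwBxLn3POPv134YyGl4AaABAg","responsibility":"unclear",'
       '"reasoning":"unclear","policy":"none","emotion":"mixed"}]')
print(parse_raw_response(raw))  # the one valid record survives
```

Dropping malformed records (rather than raising) matches how batch coding is usually handled: a single bad record from the model should not discard the other nine in the batch.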