Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As it goes for simulation theory and AI there is a better viewpoint. If we have the ability for unlimited incredibly cost effective world simulations using a quantum computer. And we populate this simulated world with autonomous beings. What would be the purpose and what would be the problems. It seems highly logical that the problem would be that autonomous beings making egocentric decisions without care for others ALWAYS culminates in a TERMINAL world simulations scenario. Does that sound eerily like what this podcast is discussing..? So from this PROBLEM we can deduce the purpose. All of us within this world/simulation is here to LEARN about the impact of our choices upon others and ultimately upon ourselves as these things are always circular ultimately. Which inescapably is moral or has religious underpinnings. Ergo.. it is our choices. We are here in this sandpit we call life to learn and grow from our choices. Seems like this point has been debated and resolved a million times over by every sophisticated culture only for everyone to collectively shrug their shoulders and resume the work of engaging in this exceedingly persistent life we live. My suggestion is to resolve that game theory and unconditional love for everyone is in everyone's best interest. Yes we are likely in constant struggle to achieve it reliably... But their is no better course of action. It is our best choice for the here and now and for the after...
youtube · AI Governance · 2025-12-08T10:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugwkx4_FlQyJMnyjE4h4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgyX70Jd4GGYdOJCjhp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwPv9hy2fIqIaMrIOh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzBeJllSRorxCzq4td4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugx7yy7TdFEJeYIRaOZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzXevjeaXBRxOUZ6cB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzTB_VlNwsKp-JtAKl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgwGb6cNlksMK1P8zQF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxMK5OXLCvu4MlZ2Kp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugx6ZESh67yX0rHT4YB4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"} ]