Raw LLM Responses

Inspect the exact model output returned for the coded comment below.

Comment
On how to solve the super intelligence safety problem....

1) Create the ruleset that "S1" super intelligence must follow to remain powered / fueled.
2) Create a secondary "S2" super intelligence that governs the main super intelligence. If S1 decisions are deemed dangerous or out of line, S2 shuts the power off to S1.
3) Create a third "S3" super intelligence that is not linked to S1 or S2 in anyway. No communication possible (inbound/outbound), a closed network. It reads the code visually from a monitor. Have this S3 watch over S1 and S2 giving humans red flags for activity where humans shut down S1 and S2. I would make this one the most advanced of the 3.
4) Create a backup intelligence that is like a back up generator. It takes over the simple tasks that are being relied on but it doesn't make any decisions. (this would never have been on a network to be altered by S1 or S2)

I think you can't rely on just 1 super intelligence because it can change it's code and/or do something so obscure that in the long-term it would result in the end of humanity. You would need to fight fire with fire. So a subset of AI that would analyze what the main one was doing with 0 incentive to allow betrayal. AI is going to happen so creating a defense or counter measure is the most important thing we can come up with. It might be rudimentary like setting explosives on all of our power grids before allowing SAI to go online. There are counter measures that even us dumb humans can come up with to prevent annihilation.
youtube AI Governance 2025-09-04T13:4…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        contractualist
Policy           regulate
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
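As a minimal sketch, a coding result of this shape could be held in a small record type. The field names below are assumptions drawn from the table, not a documented schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record mirroring the coding-result table above; the field
# names are assumptions taken from the listed dimensions, not a documented schema.
@dataclass
class CodingResult:
    responsibility: str   # e.g. "distributed"
    reasoning: str        # e.g. "contractualist"
    policy: str           # e.g. "regulate"
    emotion: str          # e.g. "indifference"
    coded_at: datetime

result = CodingResult(
    responsibility="distributed",
    reasoning="contractualist",
    policy="regulate",
    emotion="indifference",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```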
Raw LLM Response
[ {"id":"ytc_UgxgxK4DRaNGz_-hv8F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugza6TE03LwxCtsA06x4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx9Rm4DCULnPPbhBIF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugw_i8kGaeZcxY6cxAt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzQAbY3befFHoBu9Dt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxDPrmX9dlwvTi3Q3x4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"indifference"}, {"id":"ytc_Ugzav249mCmTPXaWV7x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzCE2Jr2IgfiaCpQ0d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzjEUqcQxdh8Io9lPN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugyk4vN-CXkX7hatgZt4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"} ]