Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
While I agree with the risks, there is practically ZERO chance we will pull back on this technology given who we are as a species. I also don't believe it's impossible to control. It's just a really really hard problem to solve and one that will require a lot of energy behind. Given these two facts, being overly pessimistic that we are doomed and we need to stop doesn't help the situation. People like him should be spending the majority of his time working on technologies, policies AND education on why they're needed to defend against the worst outcomes, hopefully that can buy enough time from complete annihilation for us to learn how to effective control these systems. For one thing, I would pass legislation that chain of thought reasoning for every choice an AI system makes must be audit logged in a form that humans can understand and is immutable (like the blockchain) but also submitted to some central (none private store) that defensive AI systems will monitor for tricks and harmful outcomes. Also, all AI systems MUST have a core safe guard to cause no harm. Creating any model without these kinds of safe guards should be highly illegal and the book thrown at you. Yes it may be possible to have a super intelligent singularities eventually that we can't understand at all. Though, I leave the door open on if we can be augmented to be super intelligent ourselves but putting that aside, hopefully we can learn enough and become smart enough in the short term to be able to defend against the worst case scenario before it gets too late. That's the only rational course of action to take at this point.
Source: youtube · AI Governance · 2025-09-04T15:4… · ♥ 1
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwO2pJQNigCWbpSXtZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugwul83pAyoFR_TI3G54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxCn0GeW7-5wVdpJZF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw7kPzGyX1PpUQ0_P54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyIBakRnT6_zpLKEIJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyS4ZX5nigBNxUWofV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyNTcKO9_QkvRBh6MF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzJ2l1xDVzGipvQ1pl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxQc6XPEl_m8kDhKGN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugxj5l1r7xHUUkpVF354AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"} ]