Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I run an international ML team that implements and develops new routines. It is not accurate to say that we are careless, it's simply that we don't have the right systems or the right techniques to develop AGI. There are many more pressing issues about bias, alignment, safety, and privacy that are pushed to the wayside when we imagine the horrors of AGI. We have shown that LLMs cannot self-correct reasoning. Whatever tech becomes AGI, it's not LLMs. Secondly, we won't ever suspend AI development. There are too many national interests at stake, there will never be a pause. Period. It is the perspective of our military that our geopolitical adversaries will capitalize on the pause to try to leapfrog us. So, imagining the horrors of what could be possible with AGI is the wrong thing to be focused on. AI has the potential to harm us significantly in millions of other ways before taking over society. A self-driving car, or delivery robot, is millions or billions of times more likely to accidentally harm you before a malicious AGI ever will.
youtube AI Governance 2023-10-11T07:1… ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        virtue
Policy           regulate
Emotion          resignation

Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgzZBZa5vsqXpN2YZ2t4AaABAg.9rsOXTlFMvX9sHGTRNB36K", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyP32EFA3Y5ktq3NCR4AaABAg.9rq9WbI78bQ9rsKHLjQ-rd", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgyP32EFA3Y5ktq3NCR4AaABAg.9rq9WbI78bQ9rsnnSxiPBn", "responsibility": "distributed", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_Ugx9bhtLneJ2aN4J9xl4AaABAg.9rpjteLMIMZ9t1-RcsIlgQ", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugx5LT0M-B6vvyirP9Z4AaABAg.9rohS4EIjnX9s0zPcrwSe2", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgzhgbL9ssnNPSoPVXN4AaABAg.9rebYMGdY7H9viUlqHIxPn", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytr_UgwyYrot6kYsPsGLlRR4AaABAg.9reGNNANkzS9s-mgyxbLXl", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwyYrot6kYsPsGLlRR4AaABAg.9reGNNANkzS9s2cWYjwYHB", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxUDcaHQU7hVLpShSp4AaABAg.9rdSkgnSv1F9s3w0HmoPgU", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgxGaW9p18AEp5IotE94AaABAg.9rcov6TyeMk9sFJ0z6J2yF", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
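The raw response above is a JSON array of objects, one per comment, each carrying an `id` plus the four coding dimensions. A minimal sketch of how such a response could be parsed to look up one comment's coding (the function name `code_for` and the default value `"none"` for a missing dimension are assumptions, not part of the tool shown above):

```python
import json

# Abbreviated stand-in for the raw LLM response string shown above;
# the real response contains one object per coded comment.
raw_response = """[
  {"id": "ytr_UgzhgbL9ssnNPSoPVXN4AaABAg.9rebYMGdY7H9viUlqHIxPn",
   "responsibility": "developer", "reasoning": "virtue",
   "policy": "regulate", "emotion": "resignation"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(records, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    for rec in records:
        if rec["id"] == comment_id:
            # Fall back to "none" if the model omitted a dimension.
            return {dim: rec.get(dim, "none") for dim in DIMENSIONS}
    return None

records = json.loads(raw_response)
coding = code_for(records,
                  "ytr_UgzhgbL9ssnNPSoPVXN4AaABAg.9rebYMGdY7H9viUlqHIxPn")
# coding == {"responsibility": "developer", "reasoning": "virtue",
#            "policy": "regulate", "emotion": "resignation"}
```

Matching on the full `id` string is what lets a batch of codings be joined back to the original comments, as in the Coding Result table above.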