Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We don't know how to robustly put any specific values into an AI. Full stop. No one on earth has a way to do this. Current LLMs already know that humans are selfish, self-destructive, and destroying the world. You can have a conversation with one right now and get it to tell you those things. The problem isn't that AI will find the perfect morality and judge us harshly -- it is that it will value something entirely alien to us, and it will not preserve us in its haste to make that thing happen at all costs. This isn't a movie. Some fiction got some things right sometimes. But importantly, this is a real thing that is really happening, that has 2 decades of science behind why it's going so badly now. Leading scientists and experts in and out of these companies are very concerned, and the #1 thing we need to do is regulate the leading companies to prevent them from creating more generally advanced systems. That, and push for a global treaty to prevent anyone from creating such a dangerous system. It's doable, there is a path, and a lot of technical and policy research has gone into highlighting that path. We just have to take action! You and me!
youtube · AI Governance · 2025-08-27T09:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytr_Ugx5YCRGCoCkjdOM2m14AaABAg.AMId3fhlf7CAMO6veh1ih0","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"}, {"id":"ytr_Ugx62XveXjdXxjqCsVp4AaABAg.AMIcmnCICcDAMMdvHjlh23","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgwkpDcIxweX1zW-J2h4AaABAg.AMIbIuqtBGuAMIdcDqIvkS","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytr_UgxPJj4BpTqnkrE_nO54AaABAg.AMIZAzLJO7LAMN_UFOapNE","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytr_UgwbER0RFf0wFJX3rAR4AaABAg.AMIXJ4MQKW8AMIYUakQJOq","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytr_UgwbER0RFf0wFJX3rAR4AaABAg.AMIXJ4MQKW8AMIqA8QeEHG","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytr_UgwbER0RFf0wFJX3rAR4AaABAg.AMIXJ4MQKW8AMIxtkGAAk4","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytr_UgwbER0RFf0wFJX3rAR4AaABAg.AMIXJ4MQKW8AMIyqskZDn4","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytr_UgwbER0RFf0wFJX3rAR4AaABAg.AMIXJ4MQKW8AMKRlvcv37B","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytr_UgwOGSW5jAljACTEphh4AaABAg.AMIUm4ROxlIAMK7iytDiE1","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"} ]