# Raw LLM Responses

Inspect the exact model output for any coded comment, or look up a record by its comment ID.
Random samples:

- Great discussion. I've noticed how AI always aims to please: When it doesn't kno… (`ytc_Ugy_r8BM6…`)
- @SabineHossenfelder have you started on my VR workspace for AI and humans? Bette… (`ytr_UgyeLEcpX…`)
- So even if everything he said was correct. And that's a big IF "He says he wasn'… (`ytc_UgyBM-U4O…`)
- Bigger question is why do we have job's? We all know that robotics and AI coming… (`ytc_Ugy89WdFi…`)
- Friendly reminder that people need to comb through data and label it before it c… (`ytc_UgxT4R5Rh…`)
- this is why we should put a failsafe in Ai or just not put millions of dollars i… (`ytc_UgyyWnVSu…`)
- It's a shame the voice of the robot in the car doesn't match the overall situati… (`ytc_Ugzs0o9gZ…`)
- Everything we say and do can now be manipulated by artificial intelligence and t… (`ytc_UgyCaK0Rz…`)
## Comment
> We don't know how to robustly put any specific values into an AI. Full stop. No one on earth has a way to do this.
>
> Current LLMs already know that humans are selfish, self-destructive, and destroying the world. You can have a conversation with one right now and get it to tell you those things. The problem isn't that AI will find the perfect morality and judge us harshly -- it is that it will value something entirely alien to us, and it will not preserve us in its haste to make that thing happen at all costs.
>
> This isn't a movie. Some fiction got some things right sometimes. But importantly, this is a real thing that is really happening, that has 2 decades of science behind why it's going so badly now. Leading scientists and experts in and out of these companies are very concerned, and the #1 thing we need to do is regulate the leading companies to prevent them from creating more generally advanced systems. That, and push for a global treaty to prevent anyone from creating such a dangerous system. It's doable, there is a path, and a lot of technical and policy research has gone into highlighting that path. We just have to take action! You and me!
youtube · AI Governance · 2025-08-27T09:3…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
## Raw LLM Response
```json
[
  {"id":"ytr_Ugx5YCRGCoCkjdOM2m14AaABAg.AMId3fhlf7CAMO6veh1ih0","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugx62XveXjdXxjqCsVp4AaABAg.AMIcmnCICcDAMMdvHjlh23","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwkpDcIxweX1zW-J2h4AaABAg.AMIbIuqtBGuAMIdcDqIvkS","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxPJj4BpTqnkrE_nO54AaABAg.AMIZAzLJO7LAMN_UFOapNE","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgwbER0RFf0wFJX3rAR4AaABAg.AMIXJ4MQKW8AMIYUakQJOq","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgwbER0RFf0wFJX3rAR4AaABAg.AMIXJ4MQKW8AMIqA8QeEHG","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgwbER0RFf0wFJX3rAR4AaABAg.AMIXJ4MQKW8AMIxtkGAAk4","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgwbER0RFf0wFJX3rAR4AaABAg.AMIXJ4MQKW8AMIyqskZDn4","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwbER0RFf0wFJX3rAR4AaABAg.AMIXJ4MQKW8AMKRlvcv37B","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgwOGSW5jAljACTEphh4AaABAg.AMIUm4ROxlIAMK7iytDiE1","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
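Since the raw response is a JSON array with one coding object per comment ID, looking up a record by ID is a simple scan of the parsed array. A minimal sketch (the `lookup_coding` helper and the shortened IDs below are hypothetical stand-ins, not the tool's actual API):

```python
import json

# Sample raw batch response, mirroring the JSON shape shown above.
# IDs shortened here for readability; real IDs are full YouTube thread IDs.
raw_response = """[
  {"id": "ytr_example_1", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytr_example_2", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding dict for comment_id, or None if it is absent."""
    return next((c for c in json.loads(raw) if c.get("id") == comment_id), None)

coding = lookup_coding(raw_response, "ytr_example_2")
print(coding["emotion"])  # -> outrage
```

The `None` fallback matters in practice: a batched LLM response can silently drop or mangle an ID, so a missing record should be detected rather than raise a `KeyError`.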