Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Honestly I have no hate towards ai but that ai people say we have power and arti…
ytc_Ugy-CiaBL…
Speaking of Minecraft: servers are now overrun with mapart made from AI generate…
ytc_Ugyubd6cC…
I wonder if training the AI on bad code trained it to be bad at everything, henc…
ytc_Ugx2U2Pte…
if i see a dali painting in a.i i know its stolen becuse dali has value in the a…
ytc_Ugy9bdhZd…
As far as I'm converned people who make Ai art aren't artists but Commissioners.…
ytc_Ugw0pJzHi…
Lauren Lopez the AI will be used to do the high paying jobs so that the owners o…
ytr_UgztNF_84…
A lot of people don't quite understand though that AI doesn't actually think yet…
ytc_Ugzxp-g4W…
none of the companies i have seen that said they are laying off staff for AI hav…
ytc_UgwexFHST…
Comment
That's the central danger of real, truly intelligent AI and its weaker variants in a nutshell: There's almost always a difference in what humans tell the AI we want and what we actually want, which other humans would understand by implication and context and the AI doesn't get.
The programmer wanted the AI to play tetris as well as possible but actually asked the AI to try get the highest score it could in tetris without losing, which is NOT the same thing. So the AI got a score as high as possible and then, just before it knew it would lose, it paused so it would never lose. It did exactly what it was asked, but not what its creator wanted.
This is how you make a robot with an AI to make you coffee, and then it makes you coffee, however it squishes your pet sleeping before the coffee pot to death, because whether your pet lives or dies is irrelevant to getting you coffee but walking around the pet might delay your coffee by seconds.
Misaligned AI is dangerous not because it will decide to kill humanity, but because it will absolutely decide to kill off every edible plant species on earth in order to achieve a different middle goal on the way to doing exactly what we asked it to do, regardless of how this will affect human life. It won't want to kill humanity, it just might not care whether it does so or not while ticking off a bullet point on its side sub-sub-quest of its side quest to fulfilling its main goal more effectively.
Malice-free malicious compliance leading to human extinction is more likely than Skynet, but the end result might be similar. The only way to avoid it, is to explain exactly what humans want and don't want and do so unambiguously in mathematical notation, and include that context as a complete representation of human morality defined mathematically as part of every order.
youtube
AI Moral Status
2023-12-22T21:5…
♥ 87
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
  {"id": "ytr_UgyFKmvlrGtJMqWIiSZ4AaABAg.9ywMWR4f-9u9yxexX3oe74", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugypkbb0H82IknwM4KV4AaABAg.9yjUOhdVcrwA3yZhblkvza", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugypkbb0H82IknwM4KV4AaABAg.9yjUOhdVcrwA58qRfppME7", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugzud_FfW9xFOpDWQ0h4AaABAg.9yZn05BjQCU9yfykxMyFeq", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgxIvSPOVsvxuh9MRpF4AaABAg.9yZ6riKYIHI9yd8A5cWOvS", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgwCUCupDZDySnBhtFF4AaABAg.9yX6AuXwEQA9ydTSeOacTW", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugyl0T_IfRd7T2fZait4AaABAg.9yVr7rPIf3I9ybhxXjDL9R", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyM5A_0Segu0BAqq9R4AaABAg.9yRzJzj1R4k9y_W_gF3cUx", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgydhzGwTGgCSC57bDx4AaABAg.9yPYuGFl8Wi9yW1YL3Dpn2", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_UgxoHLPOqUB2dUcUpbN4AaABAg.9yPFrrUplCf9y_kCCBI-Dh", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
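The raw response above is a JSON array, one object per coded comment, keyed by comment ID. A minimal sketch of how such a payload might be parsed and indexed to support the lookup-by-ID feature; the `parse_codings` helper and the skip-malformed-rows rule are assumptions for illustration, not the tool's actual code:

```python
import json

# Abbreviated sample of the raw LLM response shown above (two records).
raw = '''[
 {"id":"ytr_UgyFKmvlrGtJMqWIiSZ4AaABAg.9ywMWR4f-9u9yxexX3oe74","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytr_UgxoHLPOqUB2dUcUpbN4AaABAg.9yPFrrUplCf9y_kCCBI-Dh","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# The four coding dimensions displayed in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(payload: str) -> dict:
    """Parse the LLM's JSON array and index the codings by comment ID."""
    coded = {}
    for rec in json.loads(payload):
        # Require the ID plus all four dimensions; skip malformed rows.
        if "id" in rec and all(dim in rec for dim in DIMENSIONS):
            coded[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

coded = parse_codings(raw)
print(coded["ytr_UgxoHLPOqUB2dUcUpbN4AaABAg.9yPFrrUplCf9y_kCCBI-Dh"]["emotion"])  # fear
```

Indexing by ID makes each coding retrievable in constant time, which is what the "Look up by comment ID" view needs.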