Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I find this whole AI bet absolutely insane when you compare it to the development of the nuclear bomb. Back then, the best scientists the US could get their hands on said "there is a non-zero chance that we will ignite the atmosphere and end all life on the surface of the earth. We're very sure it's near zero, but it is not zero."
The motivation was at first to not let the German Reich be the one to get the nuke first, and once they surrendered, the motivation shifted to forcing the Empire of Japan into a quick surrender. Both motivations were about saving countless lives in their own population, and the risk of a catastrophic failure was almost infinitely small.
But on the AGI bet? What's the end goal that justifies risking our entire civilisation on a really big chance, somewhere between 10-40%? And WHY is everybody either OK with this or unaware of it?!
My theory for how this will end up, btw, is that AGI will never get developed, because governments are happy with AIs that let them surveil the population efficiently and influence them with propaganda and biased output into religious loyalty. Not all of them, of course, but those quickly get caught and neutralised. In the end we will be slaves and soldiers for a symbiotic corpo-government locked in an endless global war. Like, take 1984 and Cyberpunk and glue them together with AI.
youtube · AI Moral Status · 2025-11-08T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyhcCryWDYmsMZ8Y4R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzO5B20F3tq8vYbJrZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxvPFxcUOJ-p6fySU14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgwH_31RyMLpfzSAYcB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzDlMQ29XUEooqCE994AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxnmmD0tC8vkclzrEx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw8Hwo7EX-4Hz0Ey0R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyYjaTp52C3OgNY42V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyYovf1Kh45Bje4_QB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzrszZI9g3d_p61ZaN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
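The inspect-by-comment-ID workflow above can be sketched as a small parser: take the raw model output (a JSON array of coding records in the shape shown), key each record by its `id`, and look up one comment's coded dimensions. This is a minimal sketch assuming only the JSON shape visible in the sample; the function name `index_by_comment_id` is ours, not part of any tool shown here.

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# The field names and IDs below are copied from the sample output above.
raw_response = """
[
  {"id": "ytc_UgyYjaTp52C3OgNY42V4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwH_31RyMLpfzSAYcB4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "resignation"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse the model output and key each coding record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_comment_id(raw_response)
print(codings["ytc_UgyYjaTp52C3OgNY42V4AaABAg"]["emotion"])  # → fear
```

In practice a coding run would also want to validate that every record carries all four dimensions (responsibility, reasoning, policy, emotion) before storing it, since a malformed model response would otherwise surface only at lookup time.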