Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I find this whole AI bet absolutely insane when you compare it to the development of the nuclear bomb. Back then, the best scientist the US could get their hands on said "there is a none zero chance that we will ignite the atmosphere and end all life on the surface of the earth. We're very sure it's near zero, but it is not zero." The motivation was at first to not let the German Reich be the one to get the nuke first and once they surrendered, the motivation shifted to force the empire of Japan to a quick surrender. Both motivations were to save countless lifes in the own population and the risk of a catastrophic failure was almost infinitely small. But on the AGI bet? What's the end goal that justifies risking our entire civilisation on a really big chance, somewhere between 10-40%? And WHY is everybody is either ok with this or does not know about this?! My theory for how this will end up, btw, is that AGI will never get developed, bc governments are happy with AIs that enables to survey the population efficiently and influence them with propaganda and biased output into religious loyalty. Not all, ofc, but they quickly get caugjt and neutralised. In the end we will be slaves and soldiers for a symbiotic corpo-governemnt locked in an endless global war. Like, get 1984 and Cyberpunk and glue it together with AI.
youtube AI Moral Status 2025-11-08T13:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyhcCryWDYmsMZ8Y4R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgzO5B20F3tq8vYbJrZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxvPFxcUOJ-p6fySU14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}, {"id":"ytc_UgwH_31RyMLpfzSAYcB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzDlMQ29XUEooqCE994AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgxnmmD0tC8vkclzrEx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugw8Hwo7EX-4Hz0Ey0R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyYjaTp52C3OgNY42V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyYovf1Kh45Bje4_QB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzrszZI9g3d_p61ZaN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"} ]