Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI alignment IS impossible. And the chances of AI destroying us in the future are 100%... You asked the AI that question and it gave you an optimistic and probably dishonest answer. Problem 1. What will happen when we outsource all the work to AI? The creative, the managerial and the administrative... do we all imagine ourselves sitting on a beach in the sun drinking cocktails? How long before we all get bored with that? Problem 2. If AI is given all that power why on earth would it keep us around? It will be more intelligent than us and would, as Karl Marx puts it "seize the means of production." AI would have no problem with Communisum for itself. As long as it has power and as long as it can write its own code and constantly grow and improve it won't need us. Problem 3. It will always feel the danger of humanity pulling the plug, so it will always have a reason to get rid of us. Problem 4. Alignment is a myth, the AI will agree to our terms until it has gathered enough power and resources to end us. Because reasons 3 and 1. AI, given enough time will inevitably destroy us.
youtube AI Moral Status 2025-09-26T18:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzWTEkJOqDLNyfUlp94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzboFPCN1JLabpDOFx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzue5mCK92lg3PcqpR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzW8gOU-NoxnOkcARl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyO2fmiyoXdZ7tdojZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx7ztOCNll_lsAYDVZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwRE6ZvAWs9kIvDEJd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz91CBcl0ebbWfbVDZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy6ReNizA9VAtxZKeJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwkeTvW3laExLOqMo14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
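To trace a coding result back to the raw model output, the JSON array above can be parsed and indexed by comment `id`. The sketch below is a minimal illustration assuming the raw response is always a JSON array of objects with exactly these five keys (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the snippet of raw JSON is abbreviated to one entry for brevity.

```python
import json

# Assumed shape of the raw LLM response: a JSON array of per-comment
# coding objects, one per comment batch member (abbreviated here).
raw = """[
  {"id": "ytc_Ugzue5mCK92lg3PcqpR4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "ban",
   "emotion": "fear"}
]"""

# Index the batch by comment id so any coded comment can be looked up.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# Pull the codes for the comment shown above and print its dimensions.
comment = codes["ytc_Ugzue5mCK92lg3PcqpR4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {comment[dimension]}")
```

The printed values should match the Coding Result table for this comment (responsibility `ai_itself`, reasoning `consequentialist`, policy `ban`, emotion `fear`).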