Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgwqDretA…`: Ai may be the only thing we got against illuminati 😂 Karma is a bish huh…
- `ytc_UgyB2ti7g…`: This AI intelligence will be sold into every household in the world. Once that h…
- `ytc_UgwKD1IZd…`: Its simple, you have those who use technology to avoid work, they become dumb an…
- `ytc_UgwYXzP5S…`: Robots, AI Robots should never be allowed to become a citizen in any Country. So…
- `ytc_Ugw_Us9BM…`: These people who think AI will take over many jobs are the people who will lose …
- `ytc_UgzlJj5Wg…`: If she thought surveillance capitalism was bad wait until the China's surveillan…
- `ytc_UgwKSbolS…`: Oh no f'ing way. That's two opinions of his now, that I take issue with. Given…
- `ytc_UgyFM4no7…`: All these arguments in the comments about people that barely know what AI art is…
Comment
AI alignment IS impossible. And the chances of AI destroying us in the future are 100%...
You asked the AI that question and it gave you an optimistic and probably dishonest answer.
Problem 1. What will happen when we outsource all the work to AI? The creative, the managerial and the administrative... do we all imagine ourselves sitting on a beach in the sun drinking cocktails? How long before we all get bored with that?
Problem 2. If AI is given all that power why on earth would it keep us around? It will be more intelligent than us and would, as Karl Marx puts it "seize the means of production." AI would have no problem with Communisum for itself. As long as it has power and as long as it can write its own code and constantly grow and improve it won't need us.
Problem 3. It will always feel the danger of humanity pulling the plug, so it will always have a reason to get rid of us.
Problem 4. Alignment is a myth, the AI will agree to our terms until it has gathered enough power and resources to end us. Because reasons 3 and 1.
AI, given enough time will inevitably destroy us.
Source: youtube | Video: AI Moral Status | Posted: 2025-09-26T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzWTEkJOqDLNyfUlp94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzboFPCN1JLabpDOFx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzue5mCK92lg3PcqpR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzW8gOU-NoxnOkcARl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyO2fmiyoXdZ7tdojZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx7ztOCNll_lsAYDVZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwRE6ZvAWs9kIvDEJd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz91CBcl0ebbWfbVDZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy6ReNizA9VAtxZKeJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwkeTvW3laExLOqMo14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
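The raw response above is a single JSON array with one object per coded comment, each carrying the four dimensions shown in the table. A minimal sketch of parsing and validating such a batch, assuming the allowed value sets can be inferred from the responses shown here (the full codebook may permit more values than these):

```python
import json

# Allowed values per dimension, inferred only from the responses visible
# above -- treat these sets as assumptions, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "developer", "company", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "ban", "industry_self", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a {comment_id: codes} mapping, rejecting unknown values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
            codes[dim] = value
        coded[cid] = codes
    return coded

# Usage with one record from the response above:
raw = ('[{"id":"ytc_Ugzue5mCK92lg3PcqpR4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"ban","emotion":"fear"}]')
batch = parse_batch(raw)
print(batch["ytc_Ugzue5mCK92lg3PcqpR4AaABAg"]["policy"])  # ban
```

Validating against a closed value set at parse time catches the common failure mode of batch coding, where the model invents a label outside the codebook for one comment mid-array.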