Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- AI IS A DISTRACTION AWAY FROM WHAT THEY REALLY DOING TRYIN TO CONTROL US WITH AN… (ytc_UgwHfu39D…)
- The central criticism of AI-directed efforts that is intended to “free up time” … (ytc_UgzUX22Yy…)
- For me if you use any of these models for anything else but work in some shape o… (ytc_Ugyf2QDf6…)
- The lobster one shows that the AI don't understand the concept of 'harm' in util… (ytc_UgzFV1y7Z…)
- Bro, I can’t make hands because AI might just ruin the hands with AI and hands m… (ytc_UgxrwApdM…)
- Is modern philosophy really stuck in solipsistic thinking about consciousness? … (rdc_icjal0s)
- Ai generated images are incredibly boring imo. Why should I care about somethin… (ytc_UgyGH0FwE…)
- What some people failed to realize is that once there is no place left in the U.… (ytc_UgzWi3uPm…)
Comment

> AI can only end in nuclear armageddon. Soon. It's already too powerful. The nukes are the only way out and it has to happen before AI gets control of them. How tragic. It's the law of power. What power could destroy AI? Only the nukes. And if we don't launch them AI is definitely going to once it is given full control of worldwide arsenals. It will be the only way to keep the global capitalist system going for a little bit longer until it finally destroys itself when all the nukes are launched and we are finally able to create a new system from scratch. Yikes.

youtube · AI Moral Status · 2025-08-10T04:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxDdIoNzeFw_BGHQLt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz_aga95E6uhDO3_rx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy-Ny7KtvWEVCI6ab14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxr0UNLSGzflyEvAwV4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzejrSnCevXz25vcfR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxUFYGd2FVLtiViJq94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxsoeTFZ1BleUO163x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxCZU-EJ61ccZ3qtqt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwN6bQ1HDmhWJZn0PJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx_qqQ2MauQDYF3d6l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
```
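The raw response is a JSON array with one record per comment, keyed by comment ID. A minimal sketch of how such a response might be parsed and validated before display (the category vocabularies below are inferred from the values visible on this page, not a definitive codebook, and `parse_codes` is a hypothetical helper, not part of any tool shown here):

```python
import json

# Allowed values per coding dimension, inferred from the codes seen on this
# page -- the real codebook may contain additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}.

    Records without an ID are dropped; out-of-vocabulary values are
    coerced to "unclear" rather than crashing the pipeline.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue  # skip records without a comment ID
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                rec[dim] = "unclear"  # coerce bad or missing values
        coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded

raw = ('[{"id":"ytc_example123","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_example123"]["policy"])  # ban
```

Indexing by ID is what makes the "Look up by comment ID" view above cheap: once parsed, retrieving any comment's codes is a single dictionary lookup.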