Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or pick one of the random samples below to inspect it.
| Comment | ID |
|---|---|
| How do we trust AI when we can't even trust our subconscious mind from destroyin… | ytc_UgxKQ1yUj… |
| Proposal: Expand Unemployment Insurance to begin after high school. Supported a… | ytc_Ugzn0guPp… |
| the AI videos I've seen, shamelessly manipulating the AI shows the best results.… | ytc_UgyW-Cz_f… |
| It passes the Gooner test, mean normies gonna use Ai for their kind of pleasures… | ytc_UgylMPOEi… |
| It's curious to me that no one has asked the question of who actually made the A… | ytc_UgwYo7mLe… |
| Getting an AI to realize that ending the world is bad seems kind of difficult wh… | ytc_UgztBbiG8… |
| AI might take someone's job. But for others, it will just make them more product… | ytc_Ugy3llyFv… |
| 6:15 This reaction of "it's just data, it's valueless, I have no reaction to it"… | ytc_UgxZXGn9y… |
Comment
If a superintelligent AI ever concluded that humanity had no value, that outcome would not come out of nowhere. It would reflect the intentions of the humans who built it, funded it, and accelerated its development in pursuit of power. It does not take AI to recognize that human beings can be destructive to the environment and, in turn, destructive to themselves. A small percentage of humanity may be comfortable with that path, but most people know it is wrong and feel that something needs to change.
The real way to prevent AI from becoming a threat to humanity is not just to make it “safe” in a technical sense, but to change the intent behind how we create and use it. AI should not be developed primarily as a weapon or as a tool for domination. It should be built to help guide humanity toward prosperity, balance, and a healthier relationship with the natural world. The real danger is not humanity as a whole, but the destructive ambitions of the few people in positions of power who continue to steer civilization in the wrong direction.
youtube
AI Governance
2026-04-11T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxpZox4gJ94iWbaN3Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzlR8uDpxiJwjfpPZl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz0OfYxIVvUmXgoB414AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugyr-FOMgx-f49C03x14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugybr6aKf4f5IGWc9Ep4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyjgmKInNawHIbwGDJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzIKtxMTADOXIdT5JZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyT6Iq8GDzKbreSiDV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzNBqDL3Fu0MU8DlJd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugza8wZGYSfEUmuPItl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
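As a rough illustration of how a lookup by comment ID works against output like the above, the sketch below parses a raw batch response (a JSON array with one object per coded comment) and retrieves the coding dimensions shown in the "Coding Result" table. This is a minimal sketch under stated assumptions: the function name `lookup_coding` and the inlined sample string are hypothetical and not part of the actual pipeline.

```python
import json

# Hypothetical raw batch response: a JSON array with one object per coded
# comment, each carrying id, responsibility, reasoning, policy, and emotion.
raw_response = """
[
  {"id": "ytc_UgzNBqDL3Fu0MU8DlJd4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

def lookup_coding(raw: str, comment_id: str) -> dict | None:
    """Parse the model's JSON array and return the coding for one comment ID.

    Returns None if the ID is not present in this batch.
    """
    codings = json.loads(raw)
    by_id = {entry["id"]: entry for entry in codings}
    return by_id.get(comment_id)

coding = lookup_coding(raw_response, "ytc_UgzNBqDL3Fu0MU8DlJd4AaABAg")
if coding is not None:
    # Print the same dimensions displayed in the "Coding Result" table.
    for dimension in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dimension}: {coding[dimension]}")
```

In practice the same dictionary-by-ID pattern also supports the random-sample view: any comment ID from the list above can be passed to the lookup to surface its coded dimensions alongside the raw model output.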