Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
He mentioned humans living forever after altering our genome for it.
Imagine if it led to us using Ai like he said, to adjust that genome, which accidentally creates humans who basically become zombies of some sort, because we didn't create "undying humans" properly.
Which further spirals into us assigning Ai to try to undo the genome adjustment made in all humans, but Ai decides that the best way to do that is to exterminate all humans, thus leading to us having a 3 way war against "zombies who live forever" and "Ai who exterminate all humans with adjusted genomes"
I'm so high rn the entire idea basically results in us putting ourselves into one of those apocalyptic movies where society is barely surviving against the world we created
Source: youtube · Topic: AI Governance · Posted: 2025-11-29T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
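A coding result like the one tabulated above can be held in a small typed record. This is a minimal sketch: the field names mirror the table's dimensions, while the `CodingResult` class name and string-typed fields are illustrative assumptions, not part of the tool.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    responsibility: str  # e.g. "ai_itself"
    reasoning: str       # e.g. "consequentialist"
    policy: str          # e.g. "liability"
    emotion: str         # e.g. "fear"
    coded_at: str        # ISO-8601 timestamp of when the coding was stored

# The values from the table above:
result = CodingResult(
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="liability",
    emotion="fear",
    coded_at="2026-04-26T23:09:12.988011",
)
```

Freezing the dataclass keeps a stored coding immutable once it has been recorded.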
Raw LLM Response
```json
[
{"id":"ytc_Ugx_4_WVTOtoyi3Yt854AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxBufL1kr4Eh5i33294AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxmODGjBG_zEVwS3C94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxl1XVLTt3vQp8XhHh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwQjR4aUFHe8v7LBGt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzT-ch3Nm_-lnXGDK94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx9DwqEj54KG6bdNVB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwR2zHd-3K9mkJN7Gp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxaEQYDF6i_HvjaQ314AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy3JM3RsLF5CED4Jr54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
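Before a raw response like the one above is stored, it has to be parsed and sanity-checked. The sketch below is one way to do that; the four dimension names and the ID prefixes (`ytc_` for comments, `ytr_` for replies) come from this page, but the allowed-value sets are inferred only from this sample and the full codebook may define more.

```python
import json

# Categories observed in this sample; the real codebook may include others.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "liability", "industry_self", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "approval", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Skip records without a recognizable comment/reply ID.
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Keep only records whose four dimensions all take known values.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Dropping malformed records rather than raising keeps one bad line from discarding an otherwise usable batch; a stricter pipeline might log or re-queue them instead.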