Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "about 3 years ago true artifical intelligence was thought to come in decades , n…" (ytc_UgxjfqCgU…)
- "That's selection bias running the opposite direction from how you're reading it.…" (rdc_oi3w7ir)
- "If you have to pay attention in a self driving car, than what in hell is the poi…" (ytc_UgzqOd-2k…)
- "Personally, I think like this: Let AI make art, it's pretty. Let people make art…" (ytc_Ugy1Qn4DW…)
- "I think...AI will take most of the Jobs in the future...and will force people to…" (ytc_UgzOXEb2x…)
- "yall ever talk to an ai that’s like a character from a show or movie, then they …" (ytc_UgxYxeqBH…)
- "yeah turnitin dropped a new ai detector but honestly tools like GPTHuman AI can …" (ytc_UgyEmrM8z…)
- "I hate AI, but pokémon *is* a gambling game for kids 👀 Very good video, tho. Spe…" (ytc_UgylItJuB…)
Comment
It’s funny because this is a problem whose solution is WAY easier than the problem itself.
Climate change is a real problem whose solution is difficult within the system we live in. We would literally have to change the way we manage resources.
But the supposed “menace of AI” is a problem whose solution is simply to stop its development. It’s a scarecrow, a smoke bomb, an imaginary fiend, a shadow on the wall.
Boo! There is an existential threat that may drive humanity to extinction, and it is AI! It may destroy all of us… okay then, stop pouring trillions of dollars into it.
We have a REAL existential threat that may drive our species to extinction in the next decades or centuries, and it is a problem that is genuinely hard to tackle. But it goes directly against the interests of the system, so instead we talk about this fake problem that we are relentlessly funding every single day.
Man, this makes me mad. It’s like being afraid of dying from cancer while spending all your money on cigarettes and processed meat.
youtube · AI Moral Status · 2026-01-05T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy_OJ_p45jxXgt-2D14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxPMo-3m2TPWh9SFEx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
{"id":"ytc_UgyvYuE-9tPhCkRp0P94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzixhn74VQqUmGfHCB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgzpIQRvcrnfJFBJ1Kl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzQ-rBbpNOLqvmeVpB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzkxPv7k3fvH1-0gXR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzWWkk3LpLZY6T865l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx0rcjN3iZpHC0zssZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyclQvkMKbOgOpjWVt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
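The lookup step above — finding the coding for one comment inside the raw model output — can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown above, while the helper name and the inlined sample row are our own. It assumes the raw response is a valid JSON array, which real model output may not always be.

```python
import json

# One row inlined from the raw response above (sample data, not fetched live).
RAW_RESPONSE = """
[
  {"id": "ytc_UgxPMo-3m2TPWh9SFEx4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "indifference"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the model output and key each coding dict by its comment ID."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codings = index_codings(RAW_RESPONSE)
coding = codings["ytc_UgxPMo-3m2TPWh9SFEx4AaABAg"]
print(coding["policy"])  # → ban
```

Note that the dimensions in this JSON row match the "Coding Result" table above (responsibility `none`, consequentialist reasoning, `ban` policy, `indifference`), which is how the per-comment view and the raw batch response line up.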