Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Grok is better than GPT, I have used both. Amazing how apples trying block Grok…
ytc_UgwYdVXLA…
nice video and all but would really torturing a robot be neccesary rather than j…
ytc_UghKIUqvy…
“Yep Everyone rack up on food and ammunition and robot invasion will be soon” my…
ytc_UgziAq5MS…
local-first software. the pendulum is already swinging from "everything in the c…
rdc_ohf1q19
@burnaardnufc3173 regardless of what safeguards weren’t put in place, AI is in f…
ytr_UgwvxYS8e…
Like @Dragonlord826 said, writing prompts is not the same as making good literat…
ytr_UgyMPcpCP…
Driverless vehicles I believe should be outlawed. Too many complications can ar…
ytc_UgyqQGoyc…
@mitch3384 Show me a corporate exec who will pay a human to do something an AI w…
ytr_Ugx5MRrYx…
Comment
This assumes that the model itself has any form of intelligence.
I think it more likely that we kill ourselves because we programmed half of everything to operate off of a massively overblown text algorithm. What if it doesn't think? What if it's just doing exactly what it's meant to do? Putting down the word it weighs as most likely to come next? That's why it's so damned genocidal, and that's why you end up with its "self preservation". There's nothing behind it. Nothing there. It's just doing exactly what it was made to do. Either it hits an intrinsic limit, or mankind ends itself by using a word prediction algorithm on the entire Internet and somehow thinking this was robust enough to helm vital infrastructure support systems and weapons technology.
I don't think there's a lovecraftian monster lurking in the shadows. I think there's NOTHING lurking in the shadows. I think that what's happening, is that we're allowing something with no brain, no consciousness, just bytes on a board, to control things it was never meant to.
youtube
AI Moral Status
2025-12-15T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgznIeu73GsMbEABdix4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwg29CLF1TgWMy_Fsp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzahVB2lBxMv2N_WS54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgziAXAa_Qz40lD3wpR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx1JxgQ-NcRq6ONb3B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwAiVWTo47Fld7z6yt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgyQNA0sMCd4EcCNuip4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy861UUGryJe-Txhu54AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxQ0_pvlLUXYwQpVy54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyKaQKzU9CuW3Q084h4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"mixed"}
]
```
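A coding result like the table above can be recovered from a raw response like this one by parsing the JSON array, validating each record against the coding dimensions, and indexing by comment ID. Below is a minimal sketch of that step, assuming the dimension labels seen in the samples above are the full vocabulary (the actual codebook may define more); the function and variable names are illustrative, not part of any tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from the records above.
# ASSUMPTION: the real codebook may define additional labels.
SCHEMA = {
    "responsibility": {"developer", "ai_itself", "government", "none"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and index validated codings by comment ID."""
    records = json.loads(raw)
    index = {}
    for rec in records:
        # Reject any record whose value falls outside the known labels.
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        index[rec["id"]] = rec
    return index

# One record copied from the raw response above.
raw = ('[{"id":"ytc_Ugwg29CLF1TgWMy_Fsp4AaABAg",'
      '"responsibility":"developer","reasoning":"consequentialist",'
      '"policy":"liability","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_Ugwg29CLF1TgWMy_Fsp4AaABAg"]["emotion"])  # fear
```

A malformed label (say, a hallucinated `"emotion":"angry"`) fails fast here rather than silently entering the coded dataset, which is the main reason to validate before look-up.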