Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This assumes that the model itself has any form of intelligence. I think it more likely that we kill ourselves because we programmed half of everything to operate off of a massively overblown text algorithm. What if it doesn't think? What if it's just doing exactly what it's meant to do: putting out the word it weighs as most likely to come next? That's why it's so damned genocidal, and that's why you end up with its "self-preservation". There's nothing behind it. Nothing there. It's just doing exactly what it was made to do. Either it hits an intrinsic limit, or mankind ends itself by using a word-prediction algorithm on the entire Internet and somehow thinking this was robust enough to helm vital infrastructure support systems and weapons technology. I don't think there's a Lovecraftian monster lurking in the shadows. I think there's NOTHING lurking in the shadows. I think that what's happening is that we're allowing something with no brain, no consciousness, just bytes on a board, to control things it was never meant to.
youtube AI Moral Status 2025-12-15T14:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgznIeu73GsMbEABdix4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwg29CLF1TgWMy_Fsp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzahVB2lBxMv2N_WS54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgziAXAa_Qz40lD3wpR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx1JxgQ-NcRq6ONb3B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwAiVWTo47Fld7z6yt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgyQNA0sMCd4EcCNuip4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy861UUGryJe-Txhu54AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxQ0_pvlLUXYwQpVy54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyKaQKzU9CuW3Q084h4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"mixed"}
]
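The per-comment coding table above is derived from this batch response by matching the comment's id (here `ytc_Ugwg29CLF1TgWMy_Fsp4AaABAg`). A minimal Python sketch of that lookup, assuming the model returns valid JSON; the function name `code_for` is illustrative, not the tool's actual API, and the array is truncated to two records for brevity:

```python
import json

# Excerpt of the raw batch response as emitted by the coding model
# (the full array in this export contains ten objects).
raw_response = '''[
  {"id":"ytc_UgznIeu73GsMbEABdix4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwg29CLF1TgWMy_Fsp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

def code_for(raw: str, comment_id: str) -> dict:
    """Parse the batch JSON and return the coded dimensions for one comment."""
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}  # index records by comment id
    return by_id[comment_id]

coded = code_for(raw_response, "ytc_Ugwg29CLF1TgWMy_Fsp4AaABAg")
# coded matches the Coding Result table: developer / consequentialist / liability / fear
```

In practice the parse can fail if the model wraps the JSON in prose or a code fence, so a production version would strip surrounding text before calling `json.loads`.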