Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I can’t think of a single interaction I’ve had with anything that labeled itself as an AI recently that wasn’t disastrously deluded, misguided, misinformed, incoherent, inhuman, inhumane, misleading, illogical, inane, cringy, creepy, ideologically destructive or useless or a complete and total fabrication. You literally cannot ask these things a question and get an answer that doesn’t lead to some form of gaslighting or completely misguided use of language and facts. It often only seems to take milliseconds to find multiple incongruities and issues with how the larger ones seem to behave. Go and ask google some questions and see what happens. Nothing about it resembles any form of intelligence I’ve ever heard of. It does seem to be destructive to intelligence, which I suppose is potentially a kind of intelligence that it seems to readily faceplant into at any given opportunity, but the same destruction could be done with words or bombs. My rule of thumb is that it takes an entire planet to move in the right direction and we face imminent global devastation in more forms than we can contend without the addition of more wars than the globe has ever seen and genocide and plundering of resources and dismantling of all solutions related to stopping every form of human, social and ecological disaster. Even just building that many robots would cause enough destruction to significantly impact global disasters from ecology to warfare to economy to total social upheaval. We’re already there though, so why not race to the finish like a bunch of lunatic drug addled muppets. We’re better than this. Better than them, anyway. Absolute joke.
Source: youtube · AI Jobs · 2025-10-08T04:0…
Coding Result
Dimension      | Value
Responsibility | ai_itself
Reasoning      | consequentialist
Policy         | none
Emotion        | outrage
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
 {"id":"ytc_UgzV7UJdFSmd1DWukWl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugxc106Nbi8iLwgGBeV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugy_60bky7M7qAW-j3x4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzG6bd267pEwBZL2Q94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgxrKPISHuxDeJkexrh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyrIiA0egQg8eibma54AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgwGvvVAwNDU--XWHhh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_Ugz14O8ga3VYFuXBUFt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugxzudi04yq8jfezzhJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgzhkevzaMJCKoC9hUp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
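The coding result shown above comes from matching this comment's id against the JSON array in the raw LLM response. A minimal sketch of that lookup (the helper name `code_for_comment` is illustrative, not part of the pipeline; the ids and field names are taken from the response itself):

```python
import json

# A truncated copy of the raw LLM response shown above (first record only,
# for illustration; the real response contains ten records).
raw_response = (
    '[{"id":"ytc_UgzV7UJdFSmd1DWukWl4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]'
)

def code_for_comment(raw: str, comment_id: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding records) and
    return the record whose "id" matches the given comment id."""
    records = json.loads(raw)
    by_id = {record["id"]: record for record in records}
    return by_id[comment_id]

record = code_for_comment(raw_response, "ytc_UgzV7UJdFSmd1DWukWl4AaABAg")
print(record["responsibility"], record["emotion"])  # ai_itself outrage
```

Keying the records by id rather than by position makes the lookup robust if the model returns the coded comments in a different order than they were submitted.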