Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My Facebook post. Not that anyone's going to read it anyway.

Hey everyone. Haven't been on FB in a while. Not much to report. But then I watched this podcast and decided to share my completely unsolicited opinion.

Anyway, yesterday I was putting together a 500-piece puzzle. I decided to run an experiment. I took a photo of the table with the pieces scattered everywhere with my iPhone (the latest model, by the way) and gave ChatGPT a task: "Number these scattered pieces so that the order matches the original picture." The AI didn't even stop to think, it just completely shit the bed. And I have the paid, unlimited version...

So, we ended up having a conversation on a topic we've touched on before. I won't recap the whole thing, but here's the gist: the machine can't even handle a basic task like that. I asked it, "Hey, am I just using the 'basic' version and your tools are limited?" NOPE. The AI explained that a human being simply does this better than a machine. Now, some might bring up that machines are better at chess. But a puzzle is supposed to be a mechanical task. Analyze the edges, factor in the colors, and voilà, right? Nothing. Not a damn thing.

AI is still very raw. And it's going to stay that way not for years, but for decades. Just as a chimpanzee doesn't grasp how Wi-Fi works, we are overestimating what AI can do. It's not just that these are "large language models." It's the data. Throughout our "digital era," starting roughly in the 1950s, we've organized everything into neat little folders for the machine. You want a poem? Fine, here's a cabinet full of verses and rhymes, pick one. You want something on the Renaissance? Top shelf, box 468... go nuts.

But in the world of real, engineering tasks, the kind where a skyscraper's stability is on the line, or whether the oats will grow if the wind shifts next Thursday, or whether passengers survive a side-impact collision at 52 mph... the machine won't be handling stuff like that for a very, very long time. It can organize shelf numbers in Excel for a warehouse worker, sure, because the shelves and the terminal were built around the software. But it can't figure out where a bee is going to fly to get honey for the hive. All these fears are for nothing.

However, and this is the reason I decided to write this, there's another side to it. If humans ever become capable of creating a machine smarter than themselves (in the broad sense), then that machine will simply BECOME A PART OF US. Do you get it? We walked barefoot until we learned how to make shoes. And shoes made us stronger, smarter, etc. AI is just like those shoes. A pair of boots is never going to slap us across the face for forgetting to dry them or take them to the repair shop. It's obvious. We need to develop AI without being afraid of the consequences. This thing makes us more "rational," you see? Just like the advent of nuclear weapons made us more responsible. Furthermore, AI protocols are a more effective tool than our collective worldview. At the very least, you can program a parameter into such a tool to "filter out the noise." I don't know if anyone actually gets what I'm trying to say here...

Adarin Andrey. Altai, of the Kara Maiman clan. (La Rochelle, France)
youtube AI Governance 2026-04-19T14:0…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxaUUSnT-Pqtte76GV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzOcojYCtY61vZy1mZ4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx82iR1QLSlJcoiu154AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzWWMplOk7YyekuMtV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwqyGXRuGBssG7NY6V4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwcMnbk0BfdKaidT5B4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyHEGkxXdlSr9o4bZZ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwPu_7GBRLnJGF4zG54AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxtGX6btPK3-xujdPx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwKLAQIxcOpT6wlC9p4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
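A raw response like the one above can be turned into the per-dimension table shown earlier by parsing the JSON array and indexing the codes by comment id. The sketch below is a minimal, hypothetical illustration (the function name `codes_by_id` and the truncated two-entry excerpt are my own, not part of the pipeline); it assumes the model emitted valid JSON and falls back to "unclear" for any dimension the model omitted.

```python
import json

# Excerpt of a raw model response: a JSON array of per-comment codes.
raw_response = """
[
  {"id": "ytc_UgxaUUSnT-Pqtte76GV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzOcojYCtY61vZy1mZ4AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]
"""

# The four coding dimensions used in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(raw: str) -> dict:
    """Index the model's per-comment codes by comment id,
    defaulting missing dimensions to 'unclear'."""
    return {
        item["id"]: {d: item.get(d, "unclear") for d in DIMENSIONS}
        for item in json.loads(raw)
    }

codes = codes_by_id(raw_response)
print(codes["ytc_UgzOcojYCtY61vZy1mZ4AaABAg"]["emotion"])  # indifference
```

Indexing by id makes the lookup for any one comment O(1), which matters when one raw response carries codes for a whole batch of comments.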