Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Let's be clear - this is reactionary hype -- there's no analytical thinking behind a Large Language Model. It's just got a LOT of answers, some right, some wrong, to a particular question. These answers were developed by humans. ChatGPT (and other LLMs) cannot "think up something new". They just take what we give them. They're like "Clever Hans", the horse that took its cues from its handler. Sure, it looks impressive, until you understand how it works. It's a more powerful search engine. The problem is it's also waaay overconfident in its answers.

That being said, you don't want to create a SKYNET or Strangelove situation, it just makes no sense. There's always a kill switch. Re the "I hate humans" -- guess what's on the internet: the script from Terminator. When they say "Large" they mean "Massive"! And there's scripts for "Mr. Robot" and summaries of plenty of other doomsday scripts. Again, it's just Clever Hans, telling you what you want to hear. Eventually, you get into the hallucinations, like the Sydney interaction. Remember, there's detective novels, harlequin romances, UFO literature, etc. out there.

The one thing LLMs do show you is that the knowledge base you feed in is critical. Garbage in... garbage out... BUT... THAT'S NOT AI - that's just a stimulus-response loop. The only AI-ish thing you see here is the neural network: feed in a bunch of inputs, and once you reach a particular point, you can depend on it to react in a particular way. Again, "intelligence" is not the right term here (although it's frequently used); it's just an elaborate switch.

The drone story is misleading (search "Guardian air force drone killed operator") -- it was simply a training exercise. If you tell an autonomous car to drive to a place ignoring the roads, you'll get similar results. The input training set is important. And you don't put low-level switchgear in charge of launching nuclear war.

You've got way more to worry about from incompetent humans than rogue AIs. And I'll take a well-trained switching system over an incompetent human any day. Real AI is still hundreds if not thousands of years away. Sorry, SKYNET fans, these are just clever horses. (For anyone interested, the book "The Adolescence of P-1" is still available on Amazon -- way more plausible than the 'Echo' story, but still just a story. For a more benign view of AI, try the "Culture" series by Iain M. Banks.)
youtube AI Governance 2023-07-07T12:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           industry_self
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz3mWuCH4-scCmsTPt4AaABAg", "responsibility": "ai_itself",  "reasoning": "mixed",            "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgxUNuA0_3rPUXj4z9F4AaABAg", "responsibility": "developer",  "reasoning": "deontological",    "policy": "liability",     "emotion": "mixed"},
  {"id": "ytc_Ugzf-v1RLJeP1jQoFk94AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgyjwmiNVxCdSNyWPx94AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgwmwgfLO-EMaF3b-9R4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgzDKVjVZdMgnSlvkI14AaABAg", "responsibility": "developer",  "reasoning": "mixed",            "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_Ugy7Zp0J8-yWvCRcQZl4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgyRyTPK0k_F91BQO8l4AaABAg", "responsibility": "developer",  "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_Ugz4a4UPqD0Gq1BSi5x4AaABAg", "responsibility": "company",    "reasoning": "unclear",          "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgwuK5nA_7GEgBBOqct4AaABAg", "responsibility": "developer",  "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"}
]
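The raw response above is a JSON array of per-comment codes, so a batch can be parsed back into records, looked up by comment id, and tallied per dimension. A minimal sketch using Python's standard library and two of the records shown above (any larger pipeline around this is an assumption, not part of this log):

```python
import json
from collections import Counter

# Excerpt of a raw LLM response: one JSON object per coded comment.
raw = '''[
  {"id": "ytc_UgyRyTPK0k_F91BQO8l4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgwmwgfLO-EMaF3b-9R4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]'''

codes = json.loads(raw)

# Index the batch by comment id so one comment's coding can be looked up,
# as in the "Coding Result" block above.
by_id = {row["id"]: row for row in codes}
print(by_id["ytc_UgyRyTPK0k_F91BQO8l4AaABAg"]["policy"])  # industry_self

# Tally each coding dimension across the batch.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(dim, dict(Counter(row[dim] for row in codes)))
```

In practice the full ten-record array would be loaded the same way; `json.loads` raises `json.JSONDecodeError` if the model output is not valid JSON, which is worth catching before indexing.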