Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The crap being passed off as A.I. is not intelligent, it cannot learn, nor can it discern facts from falsehoods. It's just a fast automated machine that follows the rules it's been programmed to use, can quickly search through all the data it's been provided (books, articles, websites), and then spit out a response in plain language. But it does not think. It can't think. It's just faster at finding the exact same crap on the interwebz that any of us can find doing a series of searches. The junk being called A.I. today is just a collection of algorithms, and algorithms are bias embedded in code. FFS?! Have ya dealt with a chatbot?? They're annoying & inept morons! A.I. my arse. Given the rules of English grammar and composition, it can edit, correct and revise text, and even give it flare. Given the rules of various programming languages, it can edit, correct, revise and improve code, and it can take plain language requests to generate code from scratch. That does not mean it is intelligent, it's just a tool. If then, this or that. Josh Hawley grills John Miller (Sr VP PTDT & General Counsil at ITIC) https://youtu.be/vGjYuvWcSjE?si=8iTQE8ti1U0-85_A
youtube 2024-03-31T06:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           unclear
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzEGMFXZhLNaD1-4BN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxAtsKNaD2s6fwaSPF4AaABAg", "responsibility": "elites", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz_SK84NbH3S8eA6GN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwux3WNOMZ69bhSrFx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxlNvjGB6xA0h0rE754AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxZX-ZrM6bVhHqvbCF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx3ss3DWn4cPO5RSHN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwRXW3wm65cpzlFTZ54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw0FXQAgzsxgq8Yz2B4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgykeaE21rwGWZSqIuh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
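A raw response like the one above can be checked mechanically before the codes are accepted. The sketch below is a minimal validator, not part of the coding pipeline itself; the `ALLOWED` sets are inferred only from the values visible in this export, and the full codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from this export
# (assumption: the real codebook may contain additional codes).
ALLOWED = {
    "responsibility": {"none", "elites", "ai_itself", "user", "government", "developer"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval", "resignation"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any out-of-codebook value."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows

# One entry taken verbatim from the response above.
raw = ('[{"id":"ytc_UgykeaE21rwGWZSqIuh4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"unclear","emotion":"outrage"}]')
rows = validate(raw)
print(rows[0]["emotion"])  # -> outrage
```

If the model ever emits a value outside the codebook (a common failure mode for free-text LLM coding), `validate` raises instead of silently storing the bad row.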