Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
"Don't bet against AI"? Yes, I say: if you assume an infinite amount of resources and an infinite amount of time, then maybe (maybe) you will be right. Your assumption reaches into the unknown-unknown. My experience is that all of the things you list as accomplishments are only very simple accomplishments (the non-thinking habits that all humans exhibit), such as those accomplished by assembly-line workers for Henry Ford, i.e. AI simulations (LLMs). Ford was smart: he converted thinking humans into non-thinking machines. Today we are replacing those workers with non-thinking machines, and business and AI proponents call this progress!

From my perspective, there are two types of AI today: (1) clustering/relation identification and (2) generatively trained LLMs. Some AI tools are good because they rely merely upon clustering of data, that is, classifying new data into similar clusters (reading medical data, or treating new data as new members of a cluster field). Second, generating language sentences (via predictive next-word generation) is a slightly more complex type of AI. "Generatively trained AI" is no different from winning a game of chess or Go (using alpha-beta path reduction/selection) with positive-feedback training that allows the tuning of neural networks. What this game-winning accomplishment shows is that these games are no more than generative-AI reasoning mazes.

The truly revealing limitation of Watson came in one of the last Jeopardy questions: "A woman's clothes similar to a key on a computer keyboard." Up until then, all of the answers were a simple large-database lookup (like who is buried in Grant's tomb). It was not until generative training that AI could recognize clustering combinations to be directed to a different tool, such as a mathematical (agent) calculator, so that it could produce mathematical calculations.
It has finally been able to identify a sequence like "4 + 5" and recognize that sequence as something to be given to a calculator mechanism to produce a reduced answer. The calculator is a non-thinking mechanism, so there is no increase in the level of intelligence. Agents are a step that allows for more mechanical results (but again, a non-thinking type of result).

All of the limits you identified are the real keys to AI. There is a hierarchy in the limits and in the relationships among all of these limits. The simplest (least demanding) may be accomplished to some degree within this century. The highest levels ("true" AGI and ASI) may remain eons away.

Sorry, I am not impressed. My assumption is: humans are tool makers. We only make tools. The only thing we humans produce that is not a tool is another human being. Some people are capable of using a human as a tool; throughout history humans have been used as tools, and unfortunately humans will continue to do this. True AI is about "the knowledge of good and evil." As in "The Lady of Shalott," the curse is upon us. (Will AI be capable of creating/combining a similar, unrelated thought, such as the relationship between the Bible and "The Lady of Shalott"? My answer: not for a very long time.)
youtube AI Responsibility 2025-10-08T22:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugw6nia84y7t65s1IK94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwetN_J-bOhbvQ4AZR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy2gU0N5Dw-auyxiqp4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwYXf3CX96H63RqqLd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzuSj4A4OdQyojcwhF4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxJQOxZEcS9YCSRwXF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyiP4eJ62vMkZ8jwOl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwAXm2Ng6LNYsjTgkF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxGrYIX5b02__DNTK14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx6WtzleY_3Ez_qvY54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
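A minimal sketch of how a raw response like the one above could be parsed and sanity-checked. This is not the pipeline's actual code: the allowed label sets below are inferred only from the values that appear in this log (no published codebook is assumed), and the embedded data is an excerpt of two records from the response, including the one that matches the Coding Result table.

```python
import json

# Excerpt of the raw LLM response above (first record, plus the record
# matching the "Coding Result" table). Field names are taken from the log.
raw = '''[
 {"id":"ytc_Ugw6nia84y7t65s1IK94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgxJQOxZEcS9YCSRwXF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

records = json.loads(raw)

# Hypothetical label sets, inferred from the values that occur in this log.
DIMENSIONS = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "unclear"},
    "emotion": {"outrage", "mixed", "indifference", "approval", "resignation", "fear"},
}

def validate(record):
    """Return (dimension, value) pairs that fall outside the inferred label sets."""
    return [(dim, record.get(dim)) for dim, allowed in DIMENSIONS.items()
            if record.get(dim) not in allowed]

for rec in records:
    problems = validate(rec)
    print(rec["id"], "OK" if not problems else problems)
```

A check like this would catch a model response that invents a label outside the coding scheme before the record is written to the results table.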