Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have a lot of respect for Prof Hinton -- I referenced a lot of his work in my own research. He had me so convinced on neural networks' big comeback that I had many a heated debate with my AI Winter frostbitten supervisor. Still, we know where AI/ML weaknesses are because we haven't been able to solve them. My own work was dedicated to solving one of those weaknesses -- I couldn't, at least not completely. I'll be the first one calling for the Butlerian Jihad when AI can reliably separate signal from noise. So, I lean more into LeCunn's camp than Hinton's -- AI is just not smart enough yet.

My lab is already consistently running into limitations of LLMs, the poster child of current AI. Anyone working on RAG/Reasoning will have stories of the like. LLMs, like our brains, are still limited in their recall and predictive capacity. This may be due to current hardware, software, and data limitations but I think there's something fundamental. An AI with infinite reasoning will encounter Godel Incompleteness; its attempt to attain infinite data will run into the Shannon Limit.

The AI tech we have now is a huge plus. Our institutions are bogged down by the mundane. Judges and lawyers, doctors and nurses, are weighed down by administrivia paperwork. Justice and health are delayed and denied. These institutions are critical for our society to function. I'm banking on AI to relieve the bullshit-work so they can function again.

Personally, software has become fun again. I no longer spend hours searching and reading documentation/code to get a JS UI framework to do what I want. Coding agents give good enough answers so I can return to creative problem solving.
youtube · AI Governance · 2025-06-16T11:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgzFu5zZpQq8HJutnD54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgxMGjY1MJulJ2lXDwB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyzKpcfApfbdR3F7A14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugxjy3oAYYF_eOIOJr54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzUzkK8pIjOg0tXrQF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugw9HkDvOC7UBhm7BgF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugx6Dv6C_2Nb4DSaZfF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyH7yEM1BRCr5BONah4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxqjOFcLo1fM9EkMGx4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxxHwhNF69lilAc_IN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"} ]