Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I know he got a Nobel prize and he is probably 100x smarter than most of us will ever be, but I wonder if he sometimes jumps a little too far? It's really still just big linear algebra trained on specific subsets of what makes up our whole perceived reality. I love LLMs and I use them all the time as tools for coding, foreign language practice, reviewing quick ideas, getting concise overviews of super specific topics and being able to ask deeper about them, etc. Their power cannot be overstated, but anyone who uses them regularly will know how even models like Sonnet 4 with extended thinking or Gemini 2.5 can easily trip over and spew out complete nonsense, or just misunderstand something in ways an average human never would. My personal feeling is that "AI skeptics" don't realize how powerful the tools already are if leveraged properly, while "AI ultra optimists" don't realize how difficult it is going to be to tame the hallucinations to the point where the model will always say "I don't know" or "this is too difficult" rather than giving nonsense, and to make these systems 100% reliable for use in critical situations. An example is Tesla's Robotaxi that's only vision-based. On one hand it's wildly impressive what their engineers managed to create, but on the other hand, it is probably near impossible to ever get it working across large areas, under all conditions, and truly "without a steering wheel". They make very contained demos to show off their undeniably impressive tech, but even if it's 97% there, it's the remaining 3% where all the carnage lies.
youtube AI Governance 2025-06-26T07:0… ♥ 4
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwBHwU6rf_hlQKXunp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwfK00h37rq_kfWPY14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyiwgE9BnqEsGP0rWZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxLrjHqCfTqimo-C5p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzEujlO0Ryc6kxfHvF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxexANNEsJM1KEPegB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzX4zd5JZ50g44Eo8R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugys457N9Qo-N8kTOUB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyWigPDbH3xWai962J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwX_lS5vGK4pHnvmPB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
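The raw response above is a JSON array of per-comment records, each keyed by a comment id. As a minimal sketch of how the per-comment coding result shown earlier could be recovered from such a response (the field names come from the array above; the `lookup` helper and the truncated sample data are illustrative, not part of the tool):

```python
import json

# Two records copied from the raw response above (truncated for brevity).
RAW_RESPONSE = '''[
  {"id":"ytc_UgwX_lS5vGK4pHnvmPB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwBHwU6rf_hlQKXunp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

def lookup(raw_response: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the record for one comment id."""
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

coded = lookup(RAW_RESPONSE, "ytc_UgwX_lS5vGK4pHnvmPB4AaABAg")
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# developer mixed none mixed  (matching the Coding Result table above)
```

In practice raw LLM output can deviate from strict JSON, so a production parser would wrap `json.loads` in error handling and validate that the four dimension fields are present before displaying them.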