Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is NOT a failure of AI. Its a failure of IT departments NOT understanding what technology they're Applying! This is the first video (at 30% i admit) of yours that im not (fully) agreeing with. Its like making a junior dev responsible for something a seasoned dev should be. It's ludicrous. AI Is absolutely effective, just as a Knife and Fork are absolutely effective. But for the right job; the issue is, currently Hype Marketing makes it seem like AI is a fixer-all, which it certainly is not (yet) but it is darn capable of elevating efficiency if you're smart on how you implement it.

Also I had a very interesting discussion with claude the other day about architectural changes.. Because I pretty naively thought; wheres that liquid AI architecture that we heard about a year or so ago, and it turns out there have been a lot of different architectures tested.. but the reliability of the transformer architecture to function somewhat predictably (in terms of worthwhile outcomes) at large scale training, means its a more reliable investment for these mega dimensional LLMs. And even STILL it can get stuck on a false non-loss solutions during training (risking millions in the process, obviously not really as researchers tweak the weights and probably tons of other stuff :D) .. So the industry is pretty much banking on the current architecture i think (ive not done any further research yet).. given the hallucination issues, which is a feature rather then a bug, im really interested to see how industries will integrate AI.. Im in one where its absolutely a buzz word.. and rightly so; IF we are actually smart about it and NOT think its capable of something its not.

Lastly.. Even though hallucination is a little different from humans unreliability in reproducing facts; its not like the internet before LLMs was so much more trustworthier: Information people found could be outdated, or simply false because its not contextualised sufficiently to see that, or simply a interpretative error on the user's side. There is a parallel there, that i think goes overlooked far too often in my opinion. Obviously hallucinations and human mistakes are not the same in their mechanisms, but are in outcomes, and thus interesting :)
youtube · AI Responsibility · 2025-10-10T21:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   user
Reasoning        virtue
Policy           industry_self
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
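Each dimension takes a value from a closed label set. Below is a minimal validation sketch, assuming the allowed labels are exactly those observed in the raw response further down; the real codebook may define more values, and the LABELS dictionary and validate helper are hypothetical names, not part of the pipeline.

```python
# Allowed labels per coding dimension, inferred only from the outputs
# shown in the raw response below; the actual codebook may be larger.
LABELS = {
    "responsibility": {"user", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed"},
    "policy": {"none", "liability", "regulate", "ban",
               "industry_self", "unclear"},
    "emotion": {"approval", "outrage", "indifference", "mixed"},
}

def validate(record: dict) -> None:
    """Raise ValueError if a coded record uses an unknown label."""
    for dimension, allowed in LABELS.items():
        if record.get(dimension) not in allowed:
            raise ValueError(f"{dimension}={record.get(dimension)!r} "
                             f"is not in {sorted(allowed)}")

validate({"responsibility": "user", "reasoning": "virtue",
          "policy": "industry_self", "emotion": "approval"})  # passes
```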
Raw LLM Response
[ {"id":"ytc_Ugy8a5tpa2GW2PlIKN14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz_JjQ7UtECLddJ30d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwrSnWHJN_G48R1iah4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxEDEHrFe37ykA8Mot4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxtL8QSsp2C8Ke2tJZ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxWN9umtZu66zgf4aJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugxgicct-FUxyhw5Q6Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"mixed"}, {"id":"ytc_UgzQFA6eha0XtysHwJx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgzTefof0WbT_0akVCJ4AaABAg","responsibility":"unclear","reasoning":"contractualist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyiooXALIPhlz1Nqd94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]