Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
*This is such poor research, or worse...* *Most* of the complaints about AI in enterprise you list, are concerns that are *much* more valid for older models than they are with newer ones, with most of the studies you cite being studies that worked with models at least a year old nowadays. The thing is, modern/recent models are *massively* less likely to hallucinate (sources on demand). How is the fact that hallucinations have been going down missing from this video ?? That's just a super weird omission... Either you didn't do the research, or you didn't include it because it didn't fit with a planned narrative... Both are bad...

There are in fact solutions to hallucinations, you really should have done some reading of the published research on this before doing the video, it's sort of weird that you seemingly didn't, not only LLMs do hallucinate less with more training and more modern training methods, but there is also a lot of good research that has found ways to reduce hallucination (thinking and tool use being the two big ones lately, but there are many others). The problem isn't AI itself, the problem is lots of companies jumped into it too early, before the models and/or the scaffolding they used had time to mature enough to do the tasks well enough...

Btw: They (OpenAI @ GPT5 release) didn't "fudge" the numbers, they had an intern or an AI (both?) do the graphs, and didn't have anyone double-check properly, which is incompetence but it's not "fudging"... The real numbers were there, on screen, the graphs just didn't match the numbers, which is messing up, not lying... If I tell you I'm 300kg, and I show you a graph that shows that 300kg is classified as underweight, I haven't lied about my weight, I have messed up my graph...
You know the thing where you generally trust journalists, but one day you read an article about something you know a lot about, and you suddenly realize they just wing a lot of what they write/have very little knowledge/understanding on the topic? I didn't expect to have that happen with Cold fusion...
youtube AI Responsibility 2025-09-30T15:3…
Coding Result
Dimension: Value
Responsibility: developer
Reasoning: consequentialist
Policy: none
Emotion: outrage
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzG8j2YVhTg3NJUMBF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzM3UId6tuDENXTW7J4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgztKncGvoygx4a_daZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyspVHKIIbiMnkGxix4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx0zmLrJnEeFx5pTs54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxD2QSDvqvyCnRML_N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgweNe3I2t4OEG5sO_J4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxlysMwc2Lu6jVmqrd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgyUdDtwJk7jis43e1Z4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyrTzXLDPD26R7NiFh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
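A raw response like the one above can be parsed and checked before it is stored as a coding result. The sketch below is a minimal example, assuming the response is a JSON array of records with the four dimensions shown here; the allowed-value sets are inferred from the coded examples in this section and are hypothetical — the actual codebook may define more categories.

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from the coded records above.
# Hypothetical schema: the real codebook may permit additional values.
DIMENSIONS = {
    "responsibility": {"developer", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "approval", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing 'id'")
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad value for {dim!r}: {rec.get(dim)!r}")
    return records

# Example with a single (made-up) record id:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
records = parse_coding_response(raw)
emotions = Counter(r["emotion"] for r in records)
```

Validating at parse time catches malformed or off-codebook LLM output before it contaminates the coded dataset.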