Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "AI going rogue? Well, that didn't take long.. We all knew this would happen at s…" (`ytc_Ugz0-q2EY…`)
- "> Just send them all off to MIT to get PhDs in machine learning or quantum ph…" (`rdc_denly7b`)
- "> do until civilization is falling apart right in front of their eyes / No.. …" (`rdc_d0fd480`)
- "It's very difficult to control China innovations; why don't work together to fur…" (`ytc_Ugz1nC1x1…`)
- "> How do you anticipate automation of medicine, law and accounting will be ef…" (`rdc_cz31etc`)
- "Let me guess, this was because of Shadiversity's video on AI, isn't it? / Not…" (`ytc_UgwPTQnj5…`)
- "do you like that youtube is auto putting ai upscaling on shorts when you upload …" (`ytc_Ugwp2Z2mR…`)
- "I would pay more at this rate just to be confident the person im paying is an ac…" (`ytr_UgwPcI1hl…`)
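The "look up by comment ID" flow above can also be reproduced outside the UI. Below is a minimal sketch, assuming the raw LLM responses are stored on disk as JSON files whose contents follow the array format shown under "Raw LLM Response" further down; the directory layout and the function name are hypothetical.

```python
import json
from pathlib import Path
from typing import Optional

def lookup_coding(comment_id: str, responses_dir: Path) -> Optional[dict]:
    """Scan stored raw LLM responses for the entry that codes `comment_id`.

    Assumes each file in `responses_dir` holds one raw batch response:
    a JSON array of objects with an "id" field, as in the example below.
    """
    for path in sorted(responses_dir.glob("*.json")):
        try:
            batch = json.loads(path.read_text())
        except json.JSONDecodeError:
            continue  # skip files the model did not return as valid JSON
        if not isinstance(batch, list):
            continue
        for entry in batch:
            if isinstance(entry, dict) and entry.get("id") == comment_id:
                return entry
    return None

# Example (hypothetical storage directory; the ID is the one coded below):
# lookup_coding("ytc_UgyUdDtwJk7jis43e1Z4AaABAg", Path("raw_responses"))
```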
Comment
*This is such poor research, or worse...*
*Most* of the complaints about AI in enterprise you list, are concerns that are *much* more valid for older models than they are with newer ones, with most of the studies you cite being studies that worked with models at least a year old nowadays. The thing is, modern/recent models are *massively* less likely to hallucinate (sources on demand).
How is the fact that hallucinations have been going down missing from this video ?? That's just a super weird omission...
Either you didn't do the research, or you didn't include it because it didn't fit with a planned narrative... Both are bad...
There are in fact solutions to hallucinations, you really should have done some reading of the published research on this before doing the video, it's sort of weird that you seemingly didn't, not only LLMs do hallucinate less with more training and more modern training methods, but there is also a lot of good research that has found ways to reduce hallucination (thinking and tool use being the two big ones lately, but there are many others).
The problem isn't AI itself, the problem is lots of companies jumped into it too early, before the models and/or the scaffolding they used had time to mature enough to do the tasks well enough...
Btw: They (OpenAI @ GPT5 release) didn't "fudge" the numbers, they had an intern or an AI (both?) do the graphs, and didn't have anyone double-check properly, which is incompetence but it's not "fudging"... The real numbers were there, on screen, the graphs just didn't match the numbers, which is messing up, not lying... If I tell you I'm 300kg, and I show you a graph that shows that 300kg is classified as underweight, I haven't lied about my weight, I have messed up my graph...
You know the thing where you generally trust journalists, but one day you read an article about something you know a lot about, and you suddenly realize they just wing a lot of what they write/have very little knowledge/understanding on the topic? I didn't expect to have that happen with Cold fusion...
youtube · AI Responsibility · 2025-09-30T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzG8j2YVhTg3NJUMBF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzM3UId6tuDENXTW7J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgztKncGvoygx4a_daZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyspVHKIIbiMnkGxix4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx0zmLrJnEeFx5pTs54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxD2QSDvqvyCnRML_N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgweNe3I2t4OEG5sO_J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxlysMwc2Lu6jVmqrd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgyUdDtwJk7jis43e1Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyrTzXLDPD26R7NiFh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
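The "Coding Result" table above is one entry of this batch response rendered against the four coding dimensions. Here is a minimal sketch of that step, assuming the raw response is valid JSON of the shape shown; the function name is hypothetical, and entries outside that shape are treated as errors.

```python
import json

# Dimensions recorded for each coded comment, as in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_table(raw_response: str, comment_id: str) -> str:
    """Render the coding for one comment as a Markdown dimension table.

    `raw_response` is the JSON array text shown above; a missing ID or a
    missing dimension raises a ValueError rather than producing a partial table.
    """
    batch = json.loads(raw_response)
    entry = next(
        (e for e in batch if isinstance(e, dict) and e.get("id") == comment_id),
        None,
    )
    if entry is None:
        raise ValueError(f"comment {comment_id!r} not found in this response")
    rows = ["| Dimension | Value |", "|---|---|"]
    for dim in DIMENSIONS:
        if dim not in entry:
            raise ValueError(f"missing dimension {dim!r} for {comment_id!r}")
        rows.append(f"| {dim.capitalize()} | {entry[dim]} |")
    return "\n".join(rows)

# Example: print(coding_table(raw_json_text, "ytc_UgyUdDtwJk7jis43e1Z4AaABAg"))
# reproduces the Responsibility/Reasoning/Policy/Emotion rows shown above.
```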