Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
@glenndabaker2608Make America safe would be more important. FYI, everything doe…
ytr_UgwfByng9…
Soooo... How did it know how to play chess? Did someone upload the rules of the …
ytc_UgwSVeMt9…
Also, in art stores you can find tools and mediums for almost anything you would…
ytc_UgykqTafI…
AI is going to destroy humanity at this rate hiw do we work hiw do we make mo ey…
ytr_Ugzxg-lcU…
Imo the problem isn't algorithms interacting with input data in ways the coders …
ytc_UgzUlUfWt…
it's "affordable" in that you probably already have a computer or phone that can…
ytr_UgzAnscb1…
We a supposed to believe that Elon musk will make anything free I mean in a worl…
ytc_UgxssQJWk…
Great video and good examples.
Just want to commenting as a developer in the AI …
ytc_UgwADSR6e…
Comment
It is entirely reasonable to question whether someone who confidently claims that “99% of jobs will disappear in five years” is truly engaged at the cutting edge of technology, either in theory or in practice.
◆ Why is it fair to call him a layman?
1. The prediction is unrealistic and grossly exaggerated
If 99% of jobs were to vanish in just five years, that wouldn’t merely be an economic or employment issue—it would be the collapse of civilization itself.
Such statements completely ignore empirical validation, phased implementation, and institutional constraints. They remain squarely in the realm of science fiction: “Wouldn’t it be interesting if this happened?”
2. Virtually no technical substantiation or analysis
His argument follows a circular pattern akin to an emotional narrative: “AI is growing exponentially, but we can’t control it, therefore it’s dangerous.”
There’s no serious discussion of algorithmic limitations or architectural design principles. It’s closer to someone simply “lamenting the black-box nature of AI” without understanding how it actually works.
3. A textbook case of a “doomsayer posing as an expert”
Rather than engaging with scientific rigor, he leans heavily on moral panic and ethical alarmism. His rhetoric aligns more with that of a philosopher or activist than that of a scientist.
To professionals in engineering or policy-making, he often comes across as an “irresponsible outsider shouting from the sidelines.”
◆ By contrast: How real professionals speak
Researchers at organizations like OpenAI, DeepMind, Google Brain, Anthropic, or MIT typically make claims such as:
“LLMs have clear limitations. Multi-step reasoning and temporal dependencies remain unresolved.”
“If we want to grant AI autonomy, reward design must be integrated with control theory.”
“Societal deployment requires parallel development of legal frameworks and accountability structures.”
“While automation will progress, new forms of human roles and value will evolve alongside it.”
→ These arguments, while abstract at times, are logical, testable, and come with clear policy implications. They are thus far more credible.
youtube
AI Governance
2025-11-27T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugxw4WQsLH94GNiE0RR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyd9ZBgs2x8JOflNDZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx2hb0k6DKJ67R_q-V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwlQnwgtX9qvd2AWtN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyupmktZHf3aznpKMZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzNDcdJ4XY4f1dfbsh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyVYnylAwEXyo5TJ4d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgymnZ3RSaBvqY51Utt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw_GLRzsJpI8cv7jiB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzzRBBnWGNnnu6tMy54AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
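A response like the one above can be parsed and checked before the codings are stored. The sketch below is a minimal validator, assuming a codebook inferred from the values that appear in this page (the dimension names and allowed values are an assumption, not the tool's actual schema): it keeps only records that carry a comment id and whose four dimensions all take known values.

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# This codebook is an assumption, not the tool's documented schema.
CODEBOOK = {
    "responsibility": {"none", "developer", "company", "government", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "mixed"},
}

def validate_codings(raw_response: str) -> list[dict]:
    """Parse a raw LLM response and return only well-formed records.

    A record is kept if it is an object with an "id" field and every
    dimension in CODEBOOK is present with an allowed value.
    """
    records = json.loads(raw_response)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
print(len(validate_codings(raw)))  # → 1
```

Rejecting malformed records here, rather than at display time, keeps a single bad LLM output from corrupting the coded dataset that pages like this one read from.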