Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID (a minimal lookup sketch follows the sample list below).
Random samples: click to inspect.
- "Technology seems to be ahead at least 40 behind closed doors. If that's the cas…" (ytr_UgwkRb4ba…)
- "Great points. I expect a sort of robot tax. That said countries kind of have to …" (ytr_UgwtKHXiT…)
- "So AI bots took a look at the world's history and decided we'd be better off wit…" (ytc_UgythoQZ8…)
- "Amazing video pikat, you only forgot to mention that generative AIs need a LOT o…" (ytc_Ugzum1hIv…)
- "Wow. Our value is being based on our output. Not love, compassion, joy. Does…" (ytc_Ugz7l4nG0…)
- "Within another decade, it may be difficult to find paying work that an automated…" (ytc_UgxFKcipi…)
- "How many millions of people just accept the current thing as normal? These compa…" (ytc_UgxAn51JA…)
- "Will he protect you from the AI? Nevermind SOLD, I'm gettng one before they sell…" (ytr_UgxIdC_Az…)
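The lookup feature above amounts to an index from comment ID to coded record. A minimal sketch in Python, assuming the coded records are stored as a JSON array shaped like the raw response at the bottom of this page; the file name `coded_comments.json` and the helpers are hypothetical:

```python
import json

def build_index(path: str) -> dict:
    """Index coded records by comment ID for O(1) lookup."""
    with open(path) as f:
        records = json.load(f)  # a JSON array of coded records
    return {rec["id"]: rec for rec in records}

def lookup(index: dict, comment_id: str):
    """Return the coded record for a comment ID, or None if it was never coded."""
    return index.get(comment_id)

# Usage: full IDs are required; the truncated IDs shown in the
# sample list above will not match.
index = build_index("coded_comments.json")
print(lookup(index, "ytc_Ugxf-2BGeaOPfIIqGwt4AaABAg"))
```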
Comment
It's a pity to see how a hostile and unprepared host can derail a (potentially!) very interesting conversation.
Asking questions (however embarrassing they may be! That is the journalist's job, and I respect it) without leaving the guest time to think and answer:

a. is disrespectful to the guest, who cannot express his thoughts and reasoning on topics that are crucially important to him, to his business, and to all of us;
b. reinforces a well-established and boring journalistic habit of firing fast, provocative questions and demanding equally fast, poorly considered, shallow answers (an attitude that may win applause, clicks, and laughs, but is not conducive to thoughtful public debate and understanding of the issues);
c. shows an embarrassing lack of preparation for the interview (which is the interviewer's job).

The topics discussed here ranged over authors' copyright, the definition of AGI, OpenAI's growth as a company, safety switches, and moral standing, and they show how central the debate on AI is to many crucial issues in today's society. It's a real pity that we didn't get to hear Sam Altman's answers here.

There is one final aspect that the presenter is ignorant of and that Sam Altman, conversely, is adamant about: this is already happening. It's not a threat or a provocation. It's a fact that humanity (and not only the company OpenAI) is developing a new set of tools that will change the course of our history.

Would you have asked the prehistoric man who invented fire or the wheel, or the inventors of dynamite or the atomic bomb: "Who gave you the moral standing? Have you done a thorough assessment of the risks? How many people will be burned in fires caused by your invention, or crushed under those wheels?"

This is the way progress happens: by stretching further and leaping forward, frequently without a complete understanding of the significance of that leap.

Bringing the political and moral facets into the conversation (even for the respectful purpose of honestly exploring the societal risks) ignores this one main fact: AI deployment is happening right now, and humanity has to direct and control it to our benefit, instead of opposing it. That is not easy, because humanity doesn't know exactly how to do it, but it's not the first time, and it has worked out quite well in the past.

I think that genuinely guiding the audience through the complexity of these topics, in the presence of a man who probably knows much more than any of us in the audience (and has probably reflected on the impact of his work millions of times), would have made for a really exciting and insightful conversation.
youtube · 2025-09-10T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
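The table above is a straightforward rendering of a single coded record plus its coding timestamp. As a minimal sketch (the `render` helper is hypothetical, and it assumes the timestamp is stored alongside the record rather than inside it):

```python
def render(rec: dict, coded_at: str) -> str:
    """Render one coded record as the markdown table shown above."""
    rows = [
        ("Responsibility", rec["responsibility"]),
        ("Reasoning", rec["reasoning"]),
        ("Policy", rec["policy"]),
        ("Emotion", rec["emotion"]),
        ("Coded at", coded_at),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {label} | {value} |" for label, value in rows]
    return "\n".join(lines)
```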
Raw LLM Response
```json
[
  {"id":"ytc_Ugxf-2BGeaOPfIIqGwt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw8I0ggky8zFYw_xhR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxvzpBzfon3jDHX_It4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxXONdXb0P9szKkDPN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyXIZ22xLAkbsxSpqt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxNBtkcjkVjOLkJe1t4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwjDyJyWwAfsU6H_lF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyBeoOOzwIUsBM6lxd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzl0cu2kN5ns2u0MdR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyDrd9PTa40C_tmPrd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
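Downstream code should not trust raw model output blindly: a response can arrive with missing keys or labels outside the codebook. A minimal validation sketch, assuming the four dimensions shown above and value vocabularies inferred only from this one sample response (the real codebook may define more labels):

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# the actual codebook may be larger.
VOCAB = {
    "responsibility": {"none", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "unclear"},
    "emotion": {"approval", "outrage", "indifference", "resignation", "mixed", "unclear"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with missing or unknown labels."""
    records = json.loads(raw)  # fails loudly if the model wrapped the JSON in prose
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record without id: {rec!r}")
        for dim, allowed in VOCAB.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records
```

In practice one might quarantine failing records for re-coding rather than rejecting the whole batch, but strict parsing keeps bad labels out of the coded dataset.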