Raw LLM Responses
Inspect the exact model output for any coded comment.
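Reproducing this lookup offline reduces to indexing the stored codings by comment ID. A minimal sketch in Python, assuming the codings are exported as a JSON list of records shaped like the Raw LLM Response array shown below; the file name `codings.json` and the helper `load_codings` are hypothetical, not the tool's actual backend:

```python
import json

def load_codings(path: str) -> dict[str, dict]:
    """Index coded comments by ID. Assumes a JSON file containing a
    list of records like the Raw LLM Response array below; the path
    and storage format are illustrative."""
    with open(path, encoding="utf-8") as f:
        return {rec["id"]: rec for rec in json.load(f)}

codings = load_codings("codings.json")  # hypothetical export
print(codings["ytc_UgxV4wE-Rd57gFL0f6d4AaABAg"])  # an ID from the batch below
```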
Comment
I gave ChatGPT the question about moral authority that ChatGPT had formulated for the interviewer, along with Sam Altman's answer, which I had typed out in full, and asked ChatGPT whether Sam had answered the question. Here is ChatGPT's response:
Sam Altman's response is thoughtful and reflective, but it ultimately sidesteps the core of the question rather than answering it directly.
The question:
“Who granted you (or anyone) the moral authority to reshape the destiny of our species, and how are you personally accountable if you're wrong?”
This is a moral and philosophical challenge, not a question about OpenAI's progress, safety practices, or tactical shifts. It's asking:
- By what right do you take on such a consequential role?
- What personal accountability do you accept if your choices lead to harm?
Sam’s response, in essence:
- Acknowledges he's a complex person, not reducible to a single narrative.
- Reiterates OpenAI's mission: safe and broad deployment of AGI.
- Admits past missteps and evolving tactics.
- Justifies caution and the lack of open-sourcing due to safety concerns.
- Mentions the tension and tradeoffs in releasing powerful technologies.
- Frames OpenAI as just "one player" among many.
- Ends with a reaffirmation of good intent and long-term commitment.
Evaluation:
1. Did he answer the question about moral authority?
Not directly.
He doesn’t explain who granted him (or OpenAI) the right to develop world-changing technology. He implies legitimacy through:
- Intent ("broad benefit of humanity")
- Track record ("hard to give us a bad grade")
- Institutional evolution (learning as they go)
But these are pragmatic defenses, not moral ones. The core issue of legitimacy ("why you?") goes unaddressed.
2. Did he address personal accountability?
Vaguely.
He says “we’ve made mistakes and will make more,” and he’s “trying to do the best we can.” But there’s no specific mechanism or personal consequence mentioned. No clear commitment to how he or OpenAI is held accountable if their decisions cause harm.
In summary:
Sam Altman gave a measured, intelligent, and self-aware response, but it was ultimately a deflection. He reframed the question into one about intentions and operational evolution, rather than addressing the moral legitimacy and accountability at its heart.
A more direct answer would have tackled:
- Who should make these decisions about humanity’s future?
- What ethical frameworks guide them?
- How he personally accepts responsibility for mistakes.
Instead, we got a reasoned justification for their process, not a moral reckoning.
So: No, he didn’t truly answer the question — but he answered a safer, adjacent one.
Source: youtube · 2025-05-20T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
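For downstream analysis, the four coded dimensions map naturally onto a small record type. A minimal sketch, using only the label values that appear in this section's outputs; the `Coding` class and `validate` helper are mine, and the project's full codebook may define additional values:

```python
from dataclasses import dataclass

# Label sets observed in this section's outputs; the full codebook
# may allow more values than these.
RESPONSIBILITY = {"developer", "company", "distributed", "ai_itself", "none"}
REASONING = {"deontological", "consequentialist", "mixed", "unclear"}
POLICY = {"regulate", "liability", "none", "unclear"}
EMOTION = {"outrage", "fear", "approval", "indifference", "mixed"}

@dataclass
class Coding:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if any dimension carries a label outside the observed sets.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected label {value!r} for {self.id}")
```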
Raw LLM Response
[
{"id":"ytc_UgxV4wE-Rd57gFL0f6d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugym7Dt9ab6PwdOBNmV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxTDbcZ3Q5HhSQ45px4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwK_zSJz049p_1imiV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgykzaS6Wn-K9muQxvJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxOyYCokeHOdxwdC_J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy23MsKqd34kfGGbBt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyW1wdqc46WZ10DOq54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwbLb-3HP2Odz2Ki7J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxldbD_DHDzCVmiBxB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
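Because each raw response is a JSON array of per-comment codings, batch-level summaries fall out of a few lines of parsing. A hedged sketch; the helper names `parse_batch` and `tally` are mine, and it assumes the model returned clean JSON as in the sample above:

```python
import json
from collections import Counter

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw model response into a list of coding records.
    Assumes a clean JSON array, as in the sample above; real
    responses may need surrounding text stripped first."""
    return json.loads(raw)

def tally(records: list[dict], dimension: str) -> Counter:
    """Count label frequencies along one coding dimension."""
    return Counter(rec[dimension] for rec in records)
```

On the ten-record batch above, `tally(records, "emotion")` gives approval: 3, outrage: 2, mixed: 2, fear: 2, indifference: 1.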