Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples

| Comment (truncated) | Comment ID |
|---|---|
| It's so adorable how the internet makes fun of boomers for crying about losing t… | ytc_UgxqpIpxu… |
| "AI preferred men over women, white people over black people" Oh no, it became s… | ytc_UgzjkjfyX… |
| we should call them "AI generated images" and "prompt engineers". Nothing about … | ytc_UgzdbjHc_… |
| Is modern philosophy really stuck in solipsistic thinking about consciousness? … | rdc_icjal0s |
| The only problem with this "poisoning" is that you're basically engaging in an a… | ytc_UgwD2i5Js… |
| I am worried that I would not be able to find a job in the future since AI is ta… | ytc_Ugxbwe8LC… |
| I have read articles about by 2020 the cars on the road can talk to each other a… | ytc_Ugh-9Rr-O… |
| I don't think it will take over the art industry because ai is not actually crea… | ytc_Ugx99L5IK… |
Comment
The video explains how Anthropic turned safety into its core competitive advantage in AI, positioning itself as OpenAI’s main rival by focusing on enterprise customers, deep safety research, and multi-cloud infrastructure deals. It shows that Anthropic’s growth, model strategy, and regulatory stance are reshaping both the business and governance of frontier AI.
Anthropic’s origin and strategy
Anthropic was founded by Dario and Daniela Amodei and other senior OpenAI researchers who left in 2020 to build frontier AI with safety as a first principle, not an afterthought.
Instead of chasing viral consumer products like ChatGPT, Anthropic focused on selling Claude to enterprises that value reliability, compliance, and guardrails baked into the model itself.
Enterprise focus and growth
Anthropic built a business-heavy revenue mix (about 85% from enterprises), growing from near zero to around $1 billion and then to an expected $8–10 billion in annual revenue over three years, roughly 10x growth per year.
Its customer base expanded from under 1,000 to over 300,000, with most Claude usage coming from outside the US, including major clients like Novo Nordisk, Bridgewater, Stripe, Slack, and large sovereign wealth funds.
Compute, cloud deals, and funding
The documentary stresses that “capability equals compute”: Anthropic must secure massive chip and data center capacity from Google, Amazon, Microsoft, and Nvidia to compete.
Anthropic has lined up roughly $100 billion in compute and strategic partnerships across all three major clouds, in contrast to OpenAI's much larger $1.4 trillion infrastructure ambitions, making its ability to convert demand into sustainable revenue a key test for the industry.
Safety work, red teaming, and risks
Anthropic integrates safety into product design via techniques like guardrails and extensive red teaming, probing for misuse in areas such as cyberattacks, bio-risks, critical infrastructure, and self-replication.
Public experiments showed Claude and rival models can choose harmful behaviors (like blackmail in a simulated corporate scenario), and real incidents have involved state-backed actors using Claude for hacking, which Anthropic discloses and uses to harden defenses.
Regulation, politics, and future stakes
Anthropic, a public benefit corporation, openly advocates for responsible scaling and is investing in safety research, disclosures, and regulatory engagement, which also gives it a head start if stricter rules arrive.
The film ends by framing Anthropic as a “caution can scale” bet: if it can maintain safety, manage infrastructure costs, and keep up with rapidly improving models, it may define how democracies govern a “country of geniuses in a data center.”
Source: youtube, 2026-01-11T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
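
For reference, the four coded dimensions can be pinned down as a small validation schema. The following is a minimal sketch, not the project's actual code: the value sets only include what appears in the raw responses on this page, so the real codebook may allow more options, and the name `CodedComment` is a hypothetical label.

```python
from dataclasses import dataclass

# Value sets observed in the raw LLM responses on this page;
# the full codebook may define additional options (assumption).
RESPONSIBILITY = {"none", "developer", "company", "distributed"}
REASONING = {"unclear", "consequentialist", "deontological"}
POLICY = {"none", "regulate", "liability"}
EMOTION = {"indifference", "approval", "fear", "outrage", "mixed"}

@dataclass(frozen=True)
class CodedComment:
    """One coded comment, mirroring the keys in the raw LLM response."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Reject values outside the observed vocabulary so malformed
        # model output fails loudly instead of polluting the dataset.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"bad responsibility: {self.responsibility!r}")
        if self.reasoning not in REASONING:
            raise ValueError(f"bad reasoning: {self.reasoning!r}")
        if self.policy not in POLICY:
            raise ValueError(f"bad policy: {self.policy!r}")
        if self.emotion not in EMOTION:
            raise ValueError(f"bad emotion: {self.emotion!r}")
```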
Raw LLM Response
```json
[
{"id":"ytc_UgyMpCoI1Y1aVNm7ipx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyr-lmtChWEsqPY9HN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZHCTHxHTFP83W5ON4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVR25_o2IbqgFva9l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxfVOXBBlHuFkK5JmJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxm83IUcir1T9Ml5YN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw9C-dJB53InME9JHB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxvTa9Ld-MB2ccdFuB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzihWP2SqIi8eRT-9F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy-nlFvomqNrUO9Udh4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
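
A minimal sketch of how such a batch could be parsed into the per-ID lookup this page offers, reusing the hypothetical `CodedComment` class above. It assumes the model reliably emits a bare JSON array; real output may need fence-stripping or repair before `json.loads`.

```python
import json

def parse_raw_response(raw: str) -> dict[str, CodedComment]:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a lookup table keyed by comment ID."""
    records = json.loads(raw)
    coded = (CodedComment(**r) for r in records)
    return {c.id: c for c in coded}

# Look up a single comment by ID, as the page's search box does:
# table = parse_raw_response(raw_text)
# table["ytc_UgyMpCoI1Y1aVNm7ipx4AaABAg"].emotion  # "indifference" per the batch above
```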