Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugzjq6FAe…`: This was literally the very first video of this kind where I did not need to be …
- `ytr_UgziGaZEl…`: What if everything you do and day is being recorded by video and voice by the ro…
- `ytr_UgyumvCY5…`: bro, he paid the content 🤦, you cant see those images if you dont pay, there are…
- `ytr_Ugw7m0x7q…`: @TrevorDuran3390 You can't copyright A.I generated works even though many so cal…
- `ytc_UgzIc1eFL…`: Fuck it as a software developer I can say u it’s really hard then what you just …
- `ytc_UgyWXelsI…`: I so love CoPilot, I’ve even installed it on my iPhone. It’s truly amazing and I…
- `ytc_UgzQ_kaeG…`: You can’t even stop the phone scams how the hell are they going to regulate AI…
- `ytc_UgyTpWMnN…`: Can-t possibly see autonomous being more revolutionary than other systems like t…
Comment
The entertainment industry is rapidly adopting AI, raising questions about rights over digitized likenesses.
AI companies are compared to modern empires, extracting resources and labor for profit.
Major players like OpenAI, Microsoft, and Google are concentrating economic and political power.
Companies justify AI expansion as a mission to modernize society and benefit humanity.
AI's trajectory is seen as inevitable by many, mirroring past attitudes towards technological change.
Three general camps exist in AI discourse: optimists, skeptics, and haters.
The interviewee identifies with AI accountability, focusing on transparency and the societal impact of tech giants.
Not all forms of AI are equally beneficial—there is controversy over "everything machines" versus focused, task-specific AI.
AI as a research field dates back to 1956; even at its origin there was little consensus on how to define intelligence.
Anthropomorphism (attributing human traits to AI) confuses public understanding and ethical debates.
OpenAI and similar companies position themselves as carrying out a civilizing mission, akin to empires of old.
AI development often exploits IP, labor, and natural resources, sometimes without adequate consent or compensation.
AGI (Artificial General Intelligence) is defined in commercial terms: outperforming humans in economic tasks.
The dynamic between executives and workers changes as AI can replace human labor, eroding bargaining power.
Silicon Valley promotes the myth of doing good while acting in their own interests.
Founders describe company-building using religious and imperial analogies, hinting at deep myth-making.
OpenAI claimed transparency and collaboration, but shifted to secrecy and competition as it grew.
The public rhetoric of tech companies often hides competitive, profit-driven motives.
Recent fundraising places OpenAI among history’s most valuable private startups.
Data extraction is the foundation of AI commercial product development.
The scale of data collection now far exceeds even what social media companies extracted.
Companies see every new "exchange" of data (even mundane ones) as additional training for their models.
Consumers relinquish privacy and agency for convenience, often getting very little in return.
AI companies reach "empire" status when they no longer need to offer real benefits for data.
The exploitation extends to the supply chain; local communities bear environmental costs.
AGI definitions are fluid, used as marketing or regulatory shields depending on audience.
Companies focus on building general models but neglect specific real-world harms and biases.
Task-specific AI provides clearer, safer deployment and more effective results.
Models trained on biased data can amplify harmful societal outcomes.
Everything-machine rhetoric leads to misuse and overpromising of AI tools.
Real-world consequences of rapid AI deployment include economic, social, and psychological harms.
Global South countries such as Kenya and Colombia experience labor exploitation in AI supply chains.
Content moderation jobs for AI training can have severe psychological consequences for workers.
Technological benefits almost always accrue first to elites, with costs distributed downward.
Each technological revolution creates fallout for marginalized communities; this pattern can and should be broken.
Democratic control and collective action are vital for redirecting technology for public good.
The inexorable logic of empire is persuasive—resistance feels difficult but is historically possible.
AI development depends on resources from individuals and communities, creating leverage for change.
Artists and workers are fighting back with data obfuscation, strikes, and calls for transparency.
Local organizing, transparency, and collective bargaining empower communities against unchecked AI deployment.
AI job automation may create and destroy jobs, but decisions are ultimately made by human executives.
Executives can misuse AI promises as justification for layoffs even when technology falls short.
Job loss and creation from AI are both likely but will be unevenly distributed across industries.
Coding, healthcare, entertainment, and finance are priority targets for AI automation.
Democracy requires critical thinking and independent agency; overreliance on AI threatens those capacities.
Global debates on AI ethics are increasing, signaling progress in collective awareness and action.
Labor strikes (such as WGA and SAG) prove collective action is valuable in setting technology safeguards.
Collective action and focused investment in non-generative, problem-solving AI can build a better technological future.
Sustainable, ethical alternatives require proactive community, corporate, and governmental engagement.
Technology should serve society’s needs, not dictate them; ethical supply chains and rights protection are imperative.
youtube · 2025-09-18T02:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxLeWf2fqqo1PQGFKl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyqPNJsSSzJQn6tWwt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxbFQpo7WyE-N0hrql4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwXnu8LaxBBaRsLRY54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw86jEU3rJBG1nN5_x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz_CSMMHZi_FvRFdjd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwKRVtsNIRDK3-fHHp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyNkz5bk8sjp2y9_6h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxbSfeBpoZPA3arloF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgylVQVnOkQB1CZHYMN4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
```
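The raw response above is a JSON array in which each record carries the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of parsing and validating such a batch, assuming the allowed labels are exactly the values seen on this page (the actual codebook may define more), could look like this:

```python
import json

# Allowed labels per dimension -- inferred from the values visible on this
# page; the real codebook may include additional labels (an assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "user"},
    "reasoning": {"mixed", "unclear", "consequentialist", "deontological", "contractualist"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "indifference", "resignation", "fear", "outrage"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a label
    outside the (assumed) allowed sets, so bad model output fails loudly.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {value!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical record for illustration (not a real comment ID).
raw = '[{"id":"ytc_example","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"}]'
batch = parse_coded_batch(raw)
print(batch["ytc_example"]["policy"])  # → regulate
```

Validating against a closed label set before storing results is the main design choice here: it catches the common failure mode of an LLM inventing a label outside the coding scheme, rather than silently writing it to the results table.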