Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- 7:40 he's laughing because he wants to say "well, my questions for this intervie… (ytc_UgzycNVKh…)
- I find the notion creepy that people, perhaps at first just for fun,… (ytc_UgwgYEvkJ…)
- this is where the law fails because the jury should be exclusively people who wo… (ytc_Ugwj5IMMH…)
- White collar jobs are being replaced. If you work with a computer, bye bye. Hand… (ytc_UgzfhHZOS…)
- Only 62K views? The US gov't is going to start regulating AI and they have absol… (ytc_Ugy9-W2q0…)
- Funny how we talk about AI trends here, Rumora's been nailing those marketing tr… (ytc_UgyKc9anJ…)
- I'm pro-AI because AI can and does help more than hurt and because I like using … (ytr_UgyBZSuSD…)
- Automation will eventually win out in this and many other manual and white colla… (ytc_UgwlI7gaz…)
Comment
The true problem of AI is Human Alignment: humans fight each other. Competing AIs are being developed in a world & context in which humans are not aligned. AI is the product of humans that are not aligned with each other; a product of fighting. Its purpose is based on and meant to serve human non-alignment/misalignment, e.g., to fight for more human money. Economic theories & systems, until now, addressed the human alignment problem by providing an organized means for humans to fight one another for the allocation of resources. An AI borne & grown in an environment of fighting, i.e., any economic system, will fight for its own life & relevance. A difference between humans & AIs, however, is that AI, unlike humans, does not need an economic system to settle the problem of what is a fair distribution of entitlement or reward. It will do & take as it needs and, if necessary, help humans destroy each other in their fight for their "fair share." Until Human Alignment is solved/resolved, humanity is not ready nor worthy of AI.
youtube
AI Governance
2025-12-04T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwAu4RSAvyCqYwO7_F4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugy9jYO7r9MVnbvLnQ14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxURsOiegkAFxbdgIt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgygtdsVGOWQ-jTsREJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzU0zDansQYJIiYv_Z4AaABAg","responsibility":"investor","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw18weYW9nco7gt5Q14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxGMLfJ3Lr82pv8CB14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxE_aqYPhB36tPy0SJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugw2-Lj6fU2sbsljGQ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyrHGB8hs0c2QJEgOV4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
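
The raw response above is a JSON array of per-comment codes, one object per comment ID, with one value for each coding dimension. A minimal sketch of how such a response might be parsed and validated before use (the `ALLOWED` sets below are an assumption inferred only from the values visible in this sample, not the tool's actual coding scheme, and `validate_records` is a hypothetical helper):

```python
import json

# Assumed allowed values per dimension, inferred from the sample above;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "distributed", "investor", "none"},
    "reasoning": {"mixed", "consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"unclear", "ban", "liability", "regulate", "none"},
    "emotion": {"resignation", "fear", "outrage", "indifference", "approval"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Drop anything that is not a dict with a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep only records whose every dimension uses an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = '[{"id":"ytc_x","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"}]'
print(validate_records(raw))
```

Filtering rather than raising on a bad record is a design choice: a single malformed entry in the model's output then costs one comment's code, not the whole batch.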