Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
14:22 hay i get it people can be stupid but chat gpt lacks proper safty mesures
…
ytc_UgzJNrzW8…
Did you even watch the show? That topic was addressed. When the problem with IBM…
ytr_Ugx33wPc7…
The way AI is taking over, brands are lucky to have AICarma tracking their menti…
ytc_UgwBfpw2T…
@0:18 One Job that AI cannot do:
- Get pregnant and give birth to another Human …
ytc_UgxVrwdMm…
Frankly I really would have strongly preferred if they removed the Medicaid cuts…
ytc_Ugxn7hKl7…
Smart enough to build AI but not smart enough to understand the effects of it. T…
ytc_UgxxEhdXD…
You know how sarcasm is illegal in North Korea well I would like you to realize …
ytc_Ugzp4mhn5…
Ai are like kids right now they just requisitate whatever their partners tough t…
ytc_UgzOe7de4…
Comment
There's a big difference between the kind of model that suffices for agentic use vs frontier models. The US focuses on frontier models with the highest intelligence, which is why the US leads in cutting edge models like Mythos. The AI majors and hyperscalers have been focused on training and inference which can only run on high end expensive hardware. The models literally cannot fit in the memory of a commodity GPU.
But a commodity video card can run Deepseek or Qwen with a modest parameter count. They're not as smart, they get more things wrong. But we don't need Einstein for 99% of tasks, and it would be inefficient to pay professors to moderate comment sections. Agents need to do simple tasks like evaluate the content or tone of a message and categorize it or trigger an action. US hardware would be wasted on this task and we've ceded this low margin business because it is entirely contingent on the cost of energy.
The real metric is Watts per token and China's national energy policy executed over decades is paying dividends now. Cheap power enables their less efficient token generation to still compete on Watt's per token and that's why they're dominating the agentic inference market.
youtube
AI Governance
2026-04-21T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgzfSnzWst02KsznDNp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzY8mdUvNIObd0uSLN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy0AO0fNWQwyXR23b54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzZ-nltPK1eP9yln694AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzlI_R6I6jh169VVr94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyuU8SMkG6McRhNgaF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgyUA_DQcDFZ6VUdb214AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzHcRRdHIF518tDVil4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwwPhpUcnxHfa8b2cF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzHy8eKwczW26pQIG54AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"}]