Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
10:48 the irony of asking a major tech company that has huge financial stakes in…
ytc_UgzkChPNE…
Unfortunately this is just the beginning - because of AI scraping, more and more…
rdc_oh90e8z
@blearoyd As pertains to the disabled artists bit, democracy would simply be a f…
ytr_UgzMBmgQT…
the only thing I want AI for is to make weird nonsensical videos to make fun of…
ytc_UgwCWIBIg…
What a gem of a person. Thanks for the interview. Please use your outreach, for …
ytc_UgxfiFyL_…
At 5:23 she just gives up on trying insults and just says: trash. And I am about…
ytc_UgwQnEJ_u…
90% of Anthropic's (the creator of Claude) revenue comes from other companies. S…
ytr_UgwcMKwjp…
I have an AI generating 100% of my company's code. Doesn't work. Never gets revi…
rdc_oht3qx8
Comment
AI does not have a human heart or a biological, psychological, or physiological motivation system. The only way it has of selecting motives is based on its programming, its training data, and its reward system, which is essentially its programming and design.
So its motives are going to be based on how the AI was designed, and how it is rewarded and penalized by its reward system.
The only way an AI will become an existential threat is if some person instigates or brings about a situation in which a capable AI's reward system becomes programmed with existentially threatening motives, or motives that realize existential threat.
youtube
AI Governance
2025-12-14T00:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzxRT_Yo8Kq5lR8G-14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxHAvivGBWcXGwEwzx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw8Y15pJ_Lzr1Br_Pt4AaABAg","responsibility":"elites","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxAfagY0LqSLFyTy9J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyfl3vmzBeRy9F-jKJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwa7I1Y6D0n9bRSBuF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzOVWxvElUjP9GTGQh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxKxIkwMmlsB7-oiQJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgykOVEcjlm1pRHw-3t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyABLQnPrLIX9vbynR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}
]
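The "look up by comment ID" step above can be sketched in a few lines: the raw LLM response is a JSON array of records, each carrying an `id` plus the four coding dimensions, so a dictionary keyed by ID gives constant-time lookup. This is a minimal illustration, not the tool's actual implementation; the function name `index_by_id` is hypothetical, and the sample records are taken verbatim from the response shown above.

```python
import json

# Two records copied from the raw LLM response above (truncated batch).
raw_response = """
[
 {"id":"ytc_Ugwa7I1Y6D0n9bRSBuF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzOVWxvElUjP9GTGQh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and key each record by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_Ugwa7I1Y6D0n9bRSBuF4AaABAg"]["responsibility"])  # developer
```

With the index built, each dimension of a coded comment (responsibility, reasoning, policy, emotion) is a plain dictionary access, which is how the "Coding Result" table above could be populated.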