Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Were we more wealthy when we required 90% of the popultion to farm just so there…
ytc_UgxFao4gn…
Women do it better. AI would’ve been implemented in a way that works with people…
ytc_UgxrY1nwt…
The fact that you are this adamant against the very existence of ai art lends cr…
ytc_UgyzrNWFA…
All these people have be brainwashed and you say AI is not conscious, its runnin…
ytc_UgxR-dZBH…
There have been SO MANY statements only 14-15 minutes in that I feel you should …
ytc_UgyGzV4p_…
Versus OpenAI stealing our data and giving us no value back.
DeepSeek stealing…
rdc_m9gfpxq
Every quarter that passes without Teslas fsd and robotaxi releases, increases co…
ytc_UgzJXlDh7…
Hi! I believe in Jesus Christ who has saved me to be with Him for all eternity ,…
ytc_UgzxVdG4q…
Comment
I feel like the thing we miss when talking about misaligned AI is that the corporations developing AI are themselves non-human intelligent organisms misaligned to human thriving. You can argue that humans run corporations but a human misaligned to the corporate directive of "maximize shareholder value" will be removed and replaced with one in alignment. The incentive structure that corporate intelligence responds to encourages designing AI to be misaligned from the start, favoring addiction and subscription over actual utility.
youtube
AI Moral Status
2025-11-01T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxyPq1T_w8e9R5FY054AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwRabpPg-Yqo24Smmd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz6Zn1oPjiCtz5tbLV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzHBHOAKYeSQpnNNrF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugwm9rEGyvc9hqTVxaV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwaf8pzYoaKV0wpBx14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyyaDvk0iSO2EUnXPl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyx5Ipo3CfZjr63RfR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzaM1AJbmaQs_IvumF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx038np7EB2vh-X1e94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
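The raw response above is a JSON array, one object per comment, with four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and validated — the allowed value sets below are inferred only from the records shown here, so the actual code book may be larger:

```python
import json

# Allowed values per dimension, inferred from the sample records above.
# (Assumption: the real code book may include values not seen here.)
SCHEMA = {
    "responsibility": {"company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects) into a
    mapping from comment ID to its coded dimensions, dropping any record
    with a missing ID or a value outside the schema."""
    coded = {}
    for record in json.loads(raw):
        cid = record.get("id")
        dims = {k: record.get(k) for k in SCHEMA}
        if cid and all(dims[k] in SCHEMA[k] for k in SCHEMA):
            coded[cid] = dims
    return coded
```

Validating against a fixed schema like this catches the common failure mode where the model invents an off-book label; such records can then be flagged for re-coding rather than silently stored.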