Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a record by its comment ID, or browse the random samples below (previews are truncated).
- "Like how is AI going I think at one time we can't even walk and will use for AI …" (ytc_UgzSNUh3O…)
- "I had a massively long conversation with ai to see if I could watch it learn. It…" (ytc_Ugy1TwHxW…)
- "Good breakdown. But you forgot to include: false diialectic / false dichotomy ak…" (ytc_UgxRuO7oI…)
- "Calling an AI Artist an Artist, is like calling the guy topping off his ratty BM…" (ytc_Ugy1lfEyx…)
- "So...the AI is going to decide who gets pumped full of prozac at 10 years old bu…" (ytc_Ugz5f-ij7…)
- "I anticipate AI music will become so overwhelming, like a tsunami, that there wi…" (ytc_Ugw2Tqrj-…)
- "People like this always always perfectly fine with censorship so long as it goes…" (ytc_UgyjvWoAD…)
- "This episode is loosely based on my work (but using public HMDA data) - Nam, Ton…" (ytc_UgxLzIa-J…)
Comment
I'm not sure intelligence is a difference in kind or even in scale, but maybe in scope. There was perhaps some acknowledgement of this from the start; AIs wire up multiple domain specific capabilities. Humans are trained on the totality of their experience, which no AI comes remotely close to in terms of breadth. That's why "millions of miles" is just marketing and why a self driving car occasionally still makes a mistake of a kind a human who has never driven before would never make.
Though an AI only needs to be good at human psychology to award itself the mechanical turk and social engineering backdoors, especially if alignment is applied at training (if at all) and not as an ongoing filter for the AI's interactions.
Source: youtube · Video: AI Moral Status · Published: 2025-11-01T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxyPq1T_w8e9R5FY054AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwRabpPg-Yqo24Smmd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz6Zn1oPjiCtz5tbLV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzHBHOAKYeSQpnNNrF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugwm9rEGyvc9hqTVxaV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwaf8pzYoaKV0wpBx14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyyaDvk0iSO2EUnXPl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyx5Ipo3CfZjr63RfR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzaM1AJbmaQs_IvumF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx038np7EB2vh-X1e94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
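The lookup described above can be sketched in a few lines: parse the raw model output as a JSON array and index the records by comment ID. Only the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown; the variable names and the helper function are illustrative, not this tool's actual code.

```python
import json

# A minimal sketch, assuming the raw LLM response is a JSON array of
# coded records like the one shown above (one record excerpted here).
raw_response = """
[
  {"id": "ytc_UgwRabpPg-Yqo24Smmd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear",
   "emotion": "indifference"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model output and index each coded record by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

coded = index_by_id(raw_response)
record = coded["ytc_UgwRabpPg-Yqo24Smmd4AaABAg"]
print(record["responsibility"], record["emotion"])  # prints: none indifference
```

Indexing once up front makes each subsequent ID lookup a constant-time dictionary access rather than a scan of the array.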