Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- He can simply jump into the driver's seat. Self-driving cars are designed to yie… (ytc_Ugyij91Ne…)
- now all you need to do is do a prompt for the anti Christ then tell the other AI… (ytc_UgximL254…)
- Awesome!!! My own take at poisoning art involves adding extra fingers to hands… (ytc_UgyFqvo-Z…)
- 19:24 Yes, this is my experience (the first programmer, not 2nd) with using Clau… (ytc_UgxC_IpbY…)
- I just asked two different main AI models what time it was in NYC and it was off… (ytc_Ugy1ppe2V…)
- I’m sure AI is reading the comments and watching your videos (especially WF) to … (ytc_UgyuulqXn…)
- I hate that it's even called "AI", right from the start it's all a lie.… (ytc_UgyiQfak_…)
- The "AI is great for disabled people" arguement is INFURIATING to me. I have mul… (ytc_UgzcyK-zf…)
Comment
this is irresponsible (misinformed) fear mongering. LLMs have no comprehension of any kind. Subsequently they have no agency. No agency means no agenda. At worst they can be manipulated (as per your Grok example) to say very silly things. That only happens when some human put their thumb on the scale.
It doesn't sound like you have any clue as to what you're talking about and instead are pulling the most volatile quotes out of context to suggest that the industry is poised to usher in the Devil (by whatever name you want to call it).
Very silly and highly irresponsible to create content like this. Sorry.
youtube | AI Moral Status | 2026-01-14T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
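The four dimensions in the table can be modeled as a small record with per-dimension vocabularies. This is a minimal sketch: the allowed values below are only those observed in this page's responses plus `unclear`, not necessarily the full codebook.

```python
from dataclasses import dataclass

# Vocabularies observed in this page's raw responses (assumption:
# the real codebook may contain additional values).
RESPONSIBILITY = {"developer", "user", "ai_itself", "distributed", "none", "unclear"}
REASONING = {"deontological", "consequentialist", "virtue", "unclear"}
POLICY = {"liability", "regulate", "ban", "industry_self", "none", "unclear"}
EMOTION = {"outrage", "fear", "indifference", "resignation", "mixed", "approval", "unclear"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        """True if every dimension falls inside its observed vocabulary."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)

c = CodedComment("ytc_UgwW3cl05Dz4wDATZ3F4AaABAg",
                 "developer", "deontological", "liability", "outrage")
print(c.validate())  # True
```

A record like the one above, where every dimension is `unclear`, still validates; `unclear` is a legitimate code, not a parse failure.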
Raw LLM Response
[{"id":"ytc_UgwW3cl05Dz4wDATZ3F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugz3r2ej1ONlIXhp-a54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz-YDYwReeebpr2JCB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzlM8qsHK5hZVaomVV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugxaq4SMxkJrx4zdRI54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugz--c8zlFVhkH8lNUZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugwg8W_eR7B4puSvDtZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugz0TX5QjQSFRnzavjl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
 {"id":"ytc_Ugx18kz7Xgl7WCFHaqB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_Ugw45GUpcK-MwinOzqd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"})
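Raw model output is not always valid JSON; the response shown here, for instance, terminates with a stray `)` where the closing `]` belongs. A tolerant parser can be sketched as follows (a minimal sketch, assuming that single trailing-bracket glitch is the only defect to repair):

```python
import json

def parse_coding_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM coding response into {comment_id: dimensions}.

    Repairs one glitch seen in practice: a trailing ')' in place of
    the closing ']'. Anything else malformed still raises.
    """
    text = raw.strip()
    if text.endswith(")"):
        text = text[:-1] + "]"
    records = json.loads(text)
    coded = {}
    for rec in records:
        cid = rec.pop("id")
        coded[cid] = rec  # responsibility, reasoning, policy, emotion
    return coded

# Hypothetical single-record response with the same trailing-')' glitch:
raw = ('[{"id":"ytc_abc","responsibility":"developer","reasoning":"deontological",'
       '"policy":"liability","emotion":"outrage"})')
coded = parse_coding_response(raw)
print(coded["ytc_abc"]["emotion"])  # outrage
```

Keying the result by comment ID makes the "look up by comment ID" flow above a plain dictionary access; responses whose IDs never appear in the dataset can then be flagged during a later join.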