Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
What even is an "AI Artist", like tf you do? Ask ChatGPT to draw something and p…
ytc_UgyqbvAfH…
Here are the simple truths most people don't get
An economy exists to produce an…
ytc_Ugy7XKgW4…
Modern AI have another limitation though - reproduction
Even the best model thes…
ytc_UgztHPA4V…
To everyone reading this comment, tell your friends tell your family tell your n…
ytc_UgywY5Uyw…
these nsfw deepfakes are so scary…they can make them of teens and younger people…
ytc_UgwJ_jBSe…
Ai or real girls guys both should contact police and their parents no need to pa…
ytc_UgxDnYTIk…
This hasn’t aged well .. Fort nine is a liar and a bullshitter …,
*This week Tes…
ytc_UgxT6YA_a…
I'm not a AI defender. But I have to say one thing. Generating AI images, can be…
ytc_UgyFPhsC8…
Comment
This channel keeps confusing alignment failures and distributional instability with "a hidden monster." Base models don’t have intentions, personalities, or a "true face." They have gradients. Calling RLHF a "mask" that hides an evil core is a category error. Alignment doesn't conceal goals. It introduces normative behavior where none existed. When fine-tuning breaks safety, that’s not "the monster revealing itself"; it’s representation collapse and filter degradation. Cherry-picking Bing, Gemini, Grok, and sandboxed agent simulations while stripping all experimental context is not "documenting AGI." It’s an anthropomorphized horror montage. If incomprehensibility automatically implied hostility, weather systems would be eldritch gods by now. This isn’t analysis; I would call it Lovecraftian cosplay for people who don’t want to learn how models actually work.
youtube
AI Moral Status
2025-12-15T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugy08cRqfdWrfiPvMfR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz5XwfLhOgBo9WKKuR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwADlEM6OFCHxRLhCN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugxn60-oigQPBiW8Umx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxarHxDLb0wO3Oi_cV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9lrwYkfafZVwn8th4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzCG0MF8m37sHu0Nil4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzb4gUvOBUau98PxIJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzbqkVZKD_jtAdABWp4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzmnPLy-8m8qRGaBUp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}]
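The raw response above is a JSON array of coding records, one object per comment ID, with the same four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step, assuming Python; the `by_id` index is an illustration, not part of the tool, and the two records are copied verbatim from the array above:

```python
import json

# Two coding records copied from the raw LLM response above.
raw = '''[
  {"id":"ytc_Ugy08cRqfdWrfiPvMfR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwADlEM6OFCHxRLhCN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

# Parse the array and build an index keyed by comment ID.
records = json.loads(raw)
by_id = {rec["id"]: rec for rec in records}

# Look up one comment's coding by its ID.
rec = by_id["ytc_UgwADlEM6OFCHxRLhCN4AaABAg"]
print(rec["emotion"])  # prints: fear
```

Indexing into a dict once, rather than scanning the list per lookup, keeps each "inspect by comment ID" query O(1) even for large coding batches.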