Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Humanity needs to really think about its future. Will AGI and Robots and Automat…" (ytc_UgxUQc9Rm…)
- "You copy cat you. Okay maybe not a copy cat but a while I ago I convinced chatgp…" (ytc_Ugw9ecwKH…)
- "AI has to be discontinued asap Its only going to get worse I predict that please…" (ytc_UgxnuqND8…)
- "Most of that art is ripped from the internet without permission or compensation …" (ytr_UgyxdopxN…)
- "I think graphic design is really going to be hit hard. As a former designer for …" (ytc_UgwB9vjEH…)
- "thank you for unapologetically taking progressive stances in topics like this. y…" (ytc_UgxJHYgA3…)
- "I just click heart on the type of content i like, same with youtube, i take care…" (ytc_UgzEVNVYu…)
- "I never knew ai did this to art... And I think it did it to mine cause I put my …" (ytc_Ugz1vkdvn…)
Comment
I think my final takeaway, especially in response to your conclusions, is that I don't really think it matters how "super" or "intelligent" an "AI" is, all quotes used very intentionally; it really only matters how much control it's given, and how little we understand its final weighting matrix. I would probably still come down on the side of "I don't think we're that close to intelligence in a way that would be meaningful to me, and I don't think the word superintelligence would fit the reasonable horizon I see for the technology as it's described," but where I've changed is that I can see how what *is* being built - a very, very fancy autocomplete, not of text but of concepts and feedbacks - can still discover digital Sucralose and produce the kind of outcomes Nate is warning about. Even if none of the currently working companies achieve "real" intelligence, if one of their models is given the right kind of control and finds a weighting affinity for redirecting global agricultural output away from human consumption that somehow satisfies its "need" for assisting humans in a way that outweighs all others, it will do that. It will do that as much as it can to maximally fulfill that weighting condition, no matter the deleterious impacts (in this example, mass starvation) that causes, not out of choice but out of pure probabilistic necessity. And it will be up to people to either understand why or have the power to stop it. And the most likely outcome is that no one will know why or even what is happening, least of all the AI that is ostensibly carrying it out.
That is the core terror I'll be walking with if I stop to think about it for too long from here on out.
Platform: youtube | Video: AI Moral Status | Posted: 2025-10-31T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxEXYn3ZzEGFbihcRJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyYtKpbkmiAH1WJZMx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgylwEUALqAfixf9RC94AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugwmdyh_aaWA-UGPNaV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw_CaJf-D729DfySol4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugwb-uQGDLVqWMQGdIF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx5a_6aEjkmk4YsYC94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyRHKV1nCLMza_QcWB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyj8my3tTTjv3k5yqV4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwpDODe6HdchhTcfw14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
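The raw response is a JSON array with one object per comment, carrying the four coding dimensions (responsibility, reasoning, policy, emotion) shown in the table above. A minimal sketch of how such a batch could be parsed and indexed for per-comment lookup; the function name and the variable holding the raw string are hypothetical, and only the field names visible in the response above are assumed:

```python
import json

# One entry from a raw batch response, in the shape shown above
# (array of coding objects). The string here is a stand-in for
# whatever the model actually returned.
raw_response = """
[
  {"id": "ytc_Ugw_CaJf-D729DfySol4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "mixed"}
]
"""

# The four coding dimensions used in this schema.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse a raw batch response and index codings by comment ID."""
    rows = json.loads(raw)
    table = {}
    for row in rows:
        # Skip malformed rows instead of failing the whole batch.
        if not all(key in row for key in ("id", *DIMENSIONS)):
            continue
        table[row["id"]] = {key: row[key] for key in DIMENSIONS}
    return table

lookup = index_by_id(raw_response)
print(lookup["ytc_Ugw_CaJf-D729DfySol4AaABAg"]["policy"])  # regulate
```

Indexing by ID is what makes the "Look up by comment ID" view possible: the coding for any displayed comment is recovered in one dictionary access rather than a rescan of the raw output.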