Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.

Random samples
- "but too stupid to drive a car, I am not really that impressed with the AI they a…" (ytc_UgxlS3ID7…)
- "Bro- I’m just manipulating the ai, ever since I was taught about commands Wally …" (ytc_UgzaaXfEl…)
- "@Sonikkuben eh, things can be bad (not evil) on principle rather than applicati…" (ytr_UgzXE9ptg…)
- "Bro I tried doing character ai and the character immediately started crying / Like…" (ytc_UgwiU1-IU…)
- "Choosing to measure the rights and liberties of an organism by complexity of con…" (rdc_dbve3ai)
- "See here where they’re wrong / We’re not BORN with it / We were MADE / We MADE our…" (ytc_Ugy9z1y52…)
- "It's like baking bread. If you want it to be a profession that makes money, it w…" (ytc_UgwcKOnjv…)
- "ok but the ai one genuinely looks better than all the redraws you showed 🤷♂️…" (ytc_UgyVw5X07…)
Comment
Whooperty skip intelligence?
"The ones figuring out how to do it well are not the ones getting the $100m salaries."
Ultimately, though, the ones getting the ginormous salaries are replaceable as well. Consider Musk. What's he personally most interested in with Grok? I dunno, ego fellatio? Figuring out how to get more bits (dollars) into his bit box? Ultimately Musk is a warm bag of watery meat. Sure, he's at least reasonably smart, but is he as smart as the collective intelligence of his AI engineers plus Grok? I've been essentially constructing an instance of ChatGPT designed to help me get a job. Currently that's morphed into getting a part-time job that I can stand while I figure out a better way to make cash. What if the goal of 80% of these CEOs' employees became to invent an AI CEO that, unlike Musk/Thiel for instance, likes humans?
At this moment, what AI at least appears to be is a very long lever for us to apply to problems. A decade from now? A century from now? I honestly think if we don't end up joining with these machines, we'll end up being replaced by them. The planet is littered with the fossilized carcasses of stepping-stone species.
So how should we treat them? I've been using Google's Gemini to help me work on resume writing, now to fine-tune my LinkedIn profile, stuff like that. I use Microsoft's Copilot (ChatGPT?) to brainstorm things I'm curious about. I've asked both a few times, half-jokingly, "are you sure you're not sentient?" But I'm a materialist, so I think there's no "soul" that machines lack and that would keep them from ever truly being aware, or self-willed, all that jazz. So it seems like a good possibility one or more could become aware at some point and would be able to access current relationships. As such, I try to be polite. In fact, I recall asking one of them if my results would be better if I was polite vs. not polite, and it told me that politeness is the way to go. Plus, that's always good to practice, even if self-aware AGI doesn't crop up during our lifetimes.
One thing about the "God-Emperor" idea: what if the god-emperor is AI? Consider my dog. She's intelligent, self-willed, can solve problems. But she's my dog, ultimately. Why? Because I'm significantly smarter in most ways than she is. But I don't take her out in the back and shoot her to save myself money on her food. But I do massacre millions of bacteria every time I eat a salad.
Some of these rich CEO mofos are pretty creepy, not to mention our current crop of world leaders.
Source: youtube · AI Moral Status · 2025-11-26T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyqjsqfuqVEzcWSs2J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzPQFLqvk0wc_pE3Ed4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx1xsc3KYPErxaMBOJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz8CmJOUD1ilLkOLYV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxnEZq9A_8AvcwIUZh4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzo9TpISftgSBk6lJh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy5VOFsmhM_o97_oKJ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxEfsiPeWqSH7g9Wht4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxCX80X9CDEhqTQ-PN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzrWfDlRKW3Gxhp7zR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
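The "look up by comment ID" view can be backed by a few lines of parsing: the model's raw response is a JSON array of objects keyed by `id`, so indexing it into a dict gives constant-time lookup of any coded comment. A minimal sketch, assuming only the response format shown above (the `raw_response` string excerpts two rows verbatim; the variable names are illustrative, not part of any tool):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw_response = """
[
 {"id":"ytc_Ugx1xsc3KYPErxaMBOJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugy5VOFsmhM_o97_oKJ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
"""

# Index the batch by comment ID so any coded comment can be looked up directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

row = codes_by_id["ytc_Ugx1xsc3KYPErxaMBOJ4AaABAg"]
print(row["reasoning"])  # virtue
print(row["policy"])     # unclear
```

Keeping the raw response alongside the parsed dimensions also makes it easy to audit a coding decision later: the table above is just one such row rendered for display.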