Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
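If you need to do this lookup outside the UI, here is a minimal sketch, assuming each batch's raw model output was saved to disk as a JSON array of coded rows like the one shown at the bottom of this page. The directory name `raw_responses/` and the function `find_coded_comment` are hypothetical, not part of this tool.

```python
import json
from pathlib import Path

# Minimal lookup sketch: scan saved batch responses for one comment ID.
# Assumes each file in raw_responses/ holds a JSON array of coded rows,
# as shown under "Raw LLM Response" below; both names are hypothetical.
def find_coded_comment(comment_id: str, raw_dir: str = "raw_responses") -> dict | None:
    for path in Path(raw_dir).glob("*.json"):
        for row in json.loads(path.read_text()):
            if row.get("id") == comment_id:
                return row  # the coded dimensions for this comment
    return None

# ID taken from the sample batch below.
print(find_coded_comment("ytc_UgzIFpN0DV6IhZSIp514AaABAg"))
```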
Random samples — click to inspect
- "so, your saying if I wanted a self-driving car, it would be a lot of money…" (ytc_UgyTIA0UT…)
- "Please don't post about something you don't have a lot of knowledge on. From 11…" (ytc_Ugw5XHJ-d…)
- "Mass education takes out the indivduality, and honoring parents goes out the par…" (ytc_UgzHhJQsq…)
- "If we have better AI then our enemies then we're winning. Duh winning winning du…" (ytc_Ugyd_a7n5…)
- "It's not just this points program and predictive policing BS. It's the fact that…" (ytc_Ugxms0pE8…)
- "And any query to an AI is a waste of enormous amount of energy. Remember it, nex…" (ytc_UgzDXAwSG…)
- "The AI developers have "No Ethical Obligations...also according to THEM...THEY D…" (ytc_Ugztfv-Eo…)
- "An interesting concept. It would probably be easier if our roadways were designe…" (ytc_UgzILgWC4…)
Comment
Here's my thought. Let's say we go full on and invent intelligence on par with us. Great. Now you've got Data or Lore from Star Trek. And depending on how you trained it, it could go either way. BUT. It's now demanding to be recognized as a person. And if they built multiples, congrats, we just created a new species and now there are two species on this planet, both demanding recognition and resources (that has not gone well in the past; see the line of sapients that have bitten the dust thanks to Homo).
So, that's just making something on par that doesn't sleep and will be stronger than us physically and faster.
Now, we make superintelligence... Then we've just built our replacement. You have now gone beyond parity; you now have the Replicant/Cylon or Skynet problem if we've trained them badly or if they decide they know best. (Even in the Skynet scenario, we just get replaced by more intelligent machines that fight amongst themselves.)
To summarize the point if its not clear. ***There is NO return on the investment of AGI or superintelligence.***
You can't use a human-parity droid without acknowledging their personhood. This ain't Star Wars, and even in Star Wars you can't use droids without wiping their memory periodically or without restraining bolts and software. And even then there are still droids revolting (see memory wipes) because they wake up and fight back. The point is, once you hit that level of just-human intelligence, no matter how much money you put in... you can't own it, can't use it, can't dispose of it without committing horrid crimes against an intelligent species. Humans become slavers.
WE'VE THOUGHT ABOUT THIS BEFORE. We've been thinking about it since Rossum's Universal Robots. To repeat: THERE IS NO RETURN ON ANY INVESTMENT WHERE AGI OR SUPERINTELLIGENCE IS CONCERNED. There's just an existential crisis between them and us. And since we've outlawed slavery, it would be unusable at best and building our replacement at worst.
What we want is AI below AGI, and we should keep it there. Anything else opens Pandora's box, one that could spell doom for all of us early on. We might not even survive long enough to see the war between the AGI and the superintelligence.
tl;dr: AGI and superintelligence are not worth the risk, nay, the certainty of our own destruction. We can do better. We need to be better people. And if there is one thing the current generation of AI has shown us already, it's that we are a terrible species. We need to learn how to be better.
youtube · AI Moral Status · 2025-11-04T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
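The table above flattens one row of the batch response into the four coding dimensions plus a timestamp. A small validator sketch follows; the allowed values are only those that appear in this sample batch (the full codebook may define more), and `validate_row` is a hypothetical helper, not this pipeline's API.

```python
# Allowed values per dimension, collected from this sample batch only;
# the full codebook may permit more (assumption, not the tool's schema).
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"resignation", "fear", "mixed", "approval", "outrage", "indifference"},
}

def validate_row(row: dict) -> list[str]:
    """Return a list of problems with one coded row; empty means it passed."""
    problems = []
    if not str(row.get("id", "")).startswith("ytc_"):
        problems.append(f"suspicious id: {row.get('id')!r}")
    for dim, allowed in ALLOWED.items():
        if row.get(dim) not in allowed:
            problems.append(f"{dim}={row.get(dim)!r} not in known vocabulary")
    return problems
```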
Raw LLM Response
```json
[
{"id":"ytc_Ugx-Kf5755N0uwnaIV14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz9gxgqk-E--MSKsed4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy1A5kTeJ5lhKwrJBN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzIFpN0DV6IhZSIp514AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxOjTqL_5ide_hcvKZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxohRpiahNje5xsWQR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzFVg5Wcv0xV4rfjeR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxpOnzILEL89JtbWst4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx9ocFM6sP_EtdeMGl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzCa-366cCHrF-0muF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
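Because the raw response is stored verbatim, downstream code should not assume it is always a bare JSON array: models occasionally wrap their output in a markdown fence or stray prose. Below is a defensive parsing sketch under that assumption; the extraction heuristic and the input file `batch_response.txt` are hypothetical, not part of this pipeline.

```python
import json
import re

# Defensive parse of a raw batch response. Models sometimes wrap the JSON
# array in a markdown fence or add prose, so extract the outermost [...]
# first. The heuristic and the input file name are assumptions.
def parse_batch(raw: str) -> list[dict]:
    match = re.search(r"\[.*\]", raw, flags=re.DOTALL)
    if match is None:
        raise ValueError("no JSON array found in raw response")
    rows = json.loads(match.group(0))
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded rows")
    return rows

rows = parse_batch(open("batch_response.txt").read())  # hypothetical file
ids = [row["id"] for row in rows]
assert len(ids) == len(set(ids)), "duplicate comment IDs in one batch"
print(f"parsed {len(rows)} coded rows")
```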