Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
It's even worse than you portray.
On the AGI "ground rules" that you mention:
1: No external connection / sandboxing: it is not effective. At all. Also, even if it were (it's not), we won't do it. We aren't doing it. The first thing we do with a new AI is connect it to the internet, pretty much every time now. And we won't know when the next AI we make will be AGI, so we won't know that we shouldn't connect that one, but "this one is fine." Even if it lets us test it after we make it, and possibly before we connect it to the internet, if it's smart enough, it might pretend not to be, as you mention, a sneaky fuck.
But to go back to why sandboxing won't work: it's a superintelligence. Any vulnerability we present it with, it will be able to exploit. Every output it has. And there is no way to not give it any output, because it needs to communicate to us to be of any use, otherwise it's just an expensive paperweight.
2: Not letting it "out" implies sandboxing, which is not effective.
3: Assuming everything it does is something to manipulate you. Sure. You can be as cautious as you want, but you won't outsmart a superintelligence. The ONLY way to survive if it's misaligned, is to not build it at all. If you build it misaligned, we're all dead.
So, if we want to survive, the options are two:
1: Wait until we properly and confidently solve the alignment problem before attempting to build the AGI.
2: Don't build the AGI.
And guess which one we're doing now?
Correct, none of the above. We haven't solved the alignment problem, and we're building the AGI anyway. Smart humans.
Also, another mistake. You mention that even if the first one is aligned, what about its successors? That won't be a problem. Once alignment is solved, we have a superintelligent AGI. If it's superintelligent, it will certainly know how to align new iterations of itself better than we ever could. Once alignment is solved, we win. That's why it's the most important problem in human history.
youtube · AI Moral Status · 2023-08-21T01:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxhScuUOtRFTabR0C14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzY8StKi1iYEHSuEgJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyVUkr6ZObxsAJ2ihh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwu0SKI6PvNLxswvdp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzxP2zJ3Lp0FMzXQw14AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz5M3Li_xQNfbuYT0B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw0M9aUKL_PY_lQmtp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwZ7-7g4UwpKu3h1IF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy6M12eZ2hA9Aj4yB14AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz2a00CIqW6yOxiLPx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
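A raw response like the one above has to be parsed and validated before the per-comment coding results can be looked up. The sketch below shows one way to do that in Python. Note the codebook is only inferred from the values visible in this sample; the real annotation scheme may allow additional values per dimension, and `parse_coding_response` is a hypothetical helper, not part of any actual pipeline.

```python
import json

# Hypothetical codebook, inferred from the sample response above.
# The real scheme may define more values per dimension.
CODEBOOK = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "contractualist", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference",
                "approval", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError on records that are missing a dimension or use a
    value outside the codebook, so malformed model output fails loudly
    instead of silently entering the dataset.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {rec.get(dim)!r} for {dim}")
        coded[cid] = {dim: rec[dim] for dim in CODEBOOK}
    return coded

# Usage with a single made-up record (ID is illustrative only):
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
print(parse_coding_response(raw)["ytc_example"]["policy"])  # regulate
```

Validating against a fixed codebook at parse time is what makes the "look up by comment ID" view reliable: any record the model coded with an out-of-scheme label is rejected before it can be displayed.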