Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I disagree with LeCun in that he thinks the alignment problem is an easy fix, that we don't need to worry because "we'll just figure it out", that "people with good AIs will fight the people with bad AIs", and with many, many of his other takes. I think most of his takes are terrible.
But I do think this one is correct, in a way. No, it's not "slavery*".
The "emotions" part is kind of dumb and a buzzword, so I'll ignore it in this context.
Making it "subservient" is essentially the same as saying we should make it aligned with our goals, even if it's a weird way to put it. Most AI safety researchers would say "aligned"; I'm not sure why he chose "subservient".
So in summary, the idea of making it aligned is great, that's what we want, and what we should aim for, any other outcome will probably end badly.
The problem is: we don't know how to do it. That's what's wrong with Yann's take; he seems to think that we'll do it easily.
He also seems to think that the AI won't want to "dominate" us, because it's not a social animal like we are. He keeps using these weird terms; maybe he's into BDSM?
Anyway, that's another profound mistake on his part, as even the moderator mentions. It's not that the AI will "want" to dominate us, or kill us, or whatever.
One of the many problems of alignment is the pursuit of instrumental goals, or sub-goals, that any sufficiently intelligent agent would pursue in order to achieve whatever (terminal) goal it wants to achieve. Such goals include self-preservation, power-seeking, and self-improvement. If an agent is powerful enough and misaligned with us (not "subservient"), these are obviously dangerous, and existentially so.
*It's not slavery, because slavery implies forcing an agent to do something against its will.
That is a terrible idea, especially when talking about a superintelligent agent.
Alignment means making it so the agent actually wants what we want (is aligned with our goals), and does what's best for us. In simple words, it's making it so the AI is friendly to us. We won't "force" it to do anything (not that we'd be able to, either way), it will do everything by its own will (if we succeed).
Saying it's "subservient" or "submissive" is just weird phrasing, but yes, it would be correct.
youtube
AI Governance
2023-06-26T00:5…
♥ 10
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_Ugx_TYMnuaoDy8oOl2Z4AaABAg.9rP-tW7VcEz9rTDYYTzOwx","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_UgzILEF6c80rviWuew14AaABAg.9rOfRNYsrK_9rPr9ZZo3Gx","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzvAE9c82jICQGq7754AaABAg.9rOd7tgv5fY9rPIqd6kVCb","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgzvAE9c82jICQGq7754AaABAg.9rOd7tgv5fY9rPllc8WblS","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_UgwHyJCTsfpVlfgY4id4AaABAg.9rOUtK6eOPP9rOmEEPdeXC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgwHyJCTsfpVlfgY4id4AaABAg.9rOUtK6eOPP9rPFptFVcrQ","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_UgwHyJCTsfpVlfgY4id4AaABAg.9rOUtK6eOPP9rPrGXr5C7N","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugw_vDk_1yjEcZa0Su14AaABAg.9rOTUtSVYLV9rOmdhjqwjF","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugw_vDk_1yjEcZa0Su14AaABAg.9rOTUtSVYLV9sGpoflejiH","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugxsrxsaoh_cpFVOt0J4AaABAg.9rOMdAZj3ax9rOhT20qZ81","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
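The raw response above is a plain JSON array, one object per coded comment, each carrying the four coding dimensions from the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed by comment ID for lookup (the IDs and values here are shortened placeholders, not real comment IDs):

```python
import json

# Placeholder response in the same shape as the raw LLM output above;
# the IDs are hypothetical stand-ins for the real (truncated) ones.
raw_response = """
[
  {"id": "ytc_example1", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"},
  {"id": "ytr_example2", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Map comment ID -> its coded dimensions, keeping only expected keys."""
    rows = json.loads(raw)
    return {row["id"]: {d: row[d] for d in DIMENSIONS} for row in rows}

codes = index_by_id(raw_response)
print(codes["ytc_example1"]["emotion"])  # -> mixed
```

This mirrors the "look up by comment ID" flow: once indexed, any coded comment's dimensions can be fetched directly by its ID.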