Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples

- "The real threat is AI + unfettered capitalism. Publicly traded corporations are …" (ytc_UgxFdjQEK…)
- "@mark19800 I disagree. Actually no one knows what makes us sentient because the …" (ytr_UgwLjYyF-…)
- "The fact that right after this video I got a Ai video creation ad is crazy…" (ytc_UgxjVwElc…)
- "On the subject of safety. I was looking to a couple of stats regarding accidents…" (ytc_UgxsoYGsv…)
- "I wonder this too....i think...it's irrelevant in the end. Morals are for people…" (ytr_Ugzc_HNPm…)
- "I literally unsub from a channel or leave a dislike and stop watching whenever A…" (ytc_Ugy3_55po…)
- ""It's still early days for AI" -- you keep conflating the new models of AI with …" (ytc_UgyvZtMmU…)
- "talk to someone new in AI thats not financially driven if you want the truth. A…" (ytc_Ugwrk02uE…)
Comment
Melanie and Yann seem to completely misunderstand or ignore the orthogonality thesis. Yann says that more intelligence is always good.
That's a deep misunderstanding on what intelligence is and what "good" means. Good is a matter of values, or goals. Intelligence is orthogonal to goals. An agent with any amount of intelligence can have any arbitrary goals. They are not related. There are no stupid terminal goals, only stupid sub-goals relative to terminal goals. Bengio briefly mentions this, but doesn't go very deep in the explanation.
Melanie mentions the superintelligent "dumb" AI, thinking that it's silly that a superintelligence would misconstrue our will. That is a deep misunderstanding of what the risks are. The AI will know perfectly well what we want. The orthogonality thesis means that it might not necessarily care. That's the problem. It's a difference in goals or values, it's not that the superintelligence is "dumb".
Also, they don't seem to understand instrumental convergence.
I would love to have a deep discussion with them, and go through every point, one by one, because there seem to be a lot of things that they don't understand.
Source: youtube · Topic: AI Governance · Posted: 2023-06-26T00:4… · ♥ 34
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgykRfsieqhf-rMm-5N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzX0yN29IQbhWEw8uN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxCkAi5xQLPUGT9ju54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz8xg_TAUp50sGdgEh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwRg0KJemLVpW6t2ex4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuxRs_BKrl6JIqN_B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzq-DKeLeBVAkbdxkZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxLjjJkfQCEtw0eyUZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwMSBDoNzy8g3RLmlt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwp8jS3Ka-LbhS0UCx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
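The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions shown in the table. A minimal sketch of how such a batch might be parsed and validated in Python; the allowed label sets below are inferred only from the values visible on this page, not from a documented schema, and `parse_batch` is a hypothetical helper name:

```python
import json

# Assumed label sets per dimension, inferred from the values seen in this
# page's raw responses. A real schema may allow more labels.
ALLOWED = {
    "responsibility": {"none", "developer", "government"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"mixed", "fear", "outrage", "resignation"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Example with a shortened, made-up comment ID:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
batch = parse_batch(raw)
print(batch[0]["emotion"])  # outrage
```

Validating labels at parse time catches malformed model output before it reaches the coding-result table.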