Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Melanie and Yann seem to completely misunderstand or ignore the orthogonality thesis. Yann says that more intelligence is always good. That's a deep misunderstanding on what intelligence is and what "good" means. Good is a matter of values, or goals. Intelligence is orthogonal to goals. An agent with any amount of intelligence can have any arbitrary goals. They are not related. There are no stupid terminal goals, only stupid sub-goals relative to terminal goals. Bengio briefly mentions this, but doesn't go very deep in the explanation. Melanie mentions the superintelligent "dumb" AI, thinking that it's silly that a superintelligence would misconstrue our will. That is a deep misunderstanding of what the risks are. The AI will know perfectly well what we want. The orthogonality thesis means that it might not necessarily care. That's the problem. It's a difference in goals or values, it's not that the superintelligence is "dumb". Also, they don't seem to understand instrumental convergence. I would love to have a deep discussion with them, and go through every point, one by one, because there seem to be a lot of things that they don't understand.
Source: YouTube · Video: AI Governance · Posted: 2023-06-26T00:4… · ♥ 34
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgykRfsieqhf-rMm-5N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzX0yN29IQbhWEw8uN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxCkAi5xQLPUGT9ju54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugz8xg_TAUp50sGdgEh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwRg0KJemLVpW6t2ex4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzuxRs_BKrl6JIqN_B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugzq-DKeLeBVAkbdxkZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxLjjJkfQCEtw0eyUZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwMSBDoNzy8g3RLmlt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugwp8jS3Ka-LbhS0UCx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]