Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have great respect for both of these men. I generally agree with Eliezer about AI risk, and I have never seen him in better debating shape. Stephen is a creative and productive thinker. I really enjoy his writing and his twitch streams. He is good at constructing novel ideas and concepts. He is not good at understanding and analyzing other people's views and ideas. In order to counter an argument, you first need to know what the argument is. Stephen carefully avoids understanding the problem of unaligned super-intelligence in conventional terms. Instead, he tirelessly tries to reframe Eliezer's points in his own idiosyncratic terms, adapting them to his own worldview. This doesn't work. The two worldviews are incompatible. This is why he fails to understand Eliezer's message. Stephen especially wants everything to be somehow related to his own brainchild, computational irreducibility, even when there is no obvious connection between it and what Eliezer is saying.

He seems fixated on the notion that humans would be 'surprised' by the AI. If I played a game of chess with my compatriot Magnus Carlsen, and we agreed that he had to tell me which move he would make before each move I made, then I would never be 'surprised' by any move. So, starting as white, I could ask, "What will you do if I play d4?" He might say, "Then I would play d5," or whatever. "But what if I play e4 instead?" "Well," he might say, "I would play Nf6." And so on. This would not help me win the game. No single move would be surprising, nor would my certain loss.

There is a time and place to debate value relativism, but it's impertinent to bring it up when someone expresses their fear that humanity is at risk, just as it would be if some individual were dying from cancer. We don't want to hear "Maybe they can just perish! Maybe it doesn't matter! Who knows what is better?"

In spite of all this, I ended up enjoying this strange conversation.
It's interesting to see that people can be so different—that two people, both of whom I feel I understand (because I follow them), can't necessarily understand each other.
youtube AI Governance 2024-11-18T06:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          unclear
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugx6qIp5l_aI9AfElqp4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_Ugy8QpWKbxQ9n7H_aAx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_Ugx7QO9apJwSwFJJDFd4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_Ugy_7LlslQK-jD28V2J4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy0yNh2c8WMFRwdpqJ4AaABAg", "responsibility": "none",      "reasoning": "virtue",           "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgyEHcmpJN4NHiXUwF14AaABAg", "responsibility": "user",      "reasoning": "contractualist",   "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_Ugymnp8X3WB_5uO6CpF4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwcGbsvaLER3kwY4m54AaABAg", "responsibility": "none",      "reasoning": "virtue",           "policy": "unclear",  "emotion": "resignation"},
  {"id": "ytc_Ugx2VcfgXyVMdW6L8P94AaABAg", "responsibility": "developer", "reasoning": "mixed",            "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_Ugw5nG423IJW_SdQh0V4AaABAg", "responsibility": "none",      "reasoning": "virtue",           "policy": "unclear",  "emotion": "resignation"}
]
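The raw LLM response is a JSON array with one record per comment, keyed by comment id, carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be parsed and indexed by id, assuming only that the response is valid JSON in exactly this shape (the function name `index_codes` is illustrative, not part of any real pipeline; the two records are copied from the response above):

```python
import json

# Raw LLM response: a JSON array of per-comment codes (two records
# copied verbatim from the response above, for illustration).
raw_llm_response = """[
  {"id": "ytc_Ugx6qIp5l_aI9AfElqp4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy_7LlslQK-jD28V2J4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def index_codes(raw: str) -> dict:
    """Map each comment id to its coded dimensions (id dropped from the value)."""
    records = json.loads(raw)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

codes = index_codes(raw_llm_response)
print(codes["ytc_Ugy_7LlslQK-jD28V2J4AaABAg"]["policy"])  # regulate
```

Looking up a comment's id in the resulting dict yields the same Dimension/Value pairs shown in the coding-result table.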