Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This year marked 2 decades of my career in software development. I was reading about AI/AGI/ASI from the middle of 2000 onward. I read A New Kind of Science by Wolfram when it was released in 2002. I was present on overcomingbias, then when lesswrong was created. I read and participated in some of all those conversations about AI risk. I was AI pdoom=100% long before 2020. I know all the arguments,... If you asked me in 2020, if AI will kill us all, I wouldn't hesitate to say "yes, 100%." We were playing with genetic algorithms (wiki with the same name if you don't know) from ~2000 on old Pentium IIIs.

That said, in the last 2 years, I started to have a feeling that there is something wrong with the paperclip maximizer argument. Just a slight change in my own thinking. I don't have the IQ to present a counterargument, because I know all (OK, most) counter-counter argument from Yud. I was hoping Wolfram would present it. And I have the same feeling he has that there is some limit, boundary, that is a part of our base reality, that even ASI can't overcome.

I'm not saying AGI can't wipe us out with viruses and such. I'm saying I think there is something that is "base reality" that prevents "paperclip maximizers."

And I'll repeat again, because I know most don't actually grasp what Yud is saying. Yud is saying that mere optimization of any goal, however benign, can wipe us all. That AGI/ASI is so fundamentally different, it's 100% it happens. His logic / logic steps are solid.

But I really hope Wolfram will think about it and present an explanation that doesn't fall into the trap "I just can't imagine X." When he replied with the alien engine analogy, this is a kind of thing I expected from him. Thinking differently, from an angle that does not come from "human values." So for me personally, this was a fascinating conversation.
Source: YouTube · AI Governance · 2024-11-13T17:3… · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
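For readers scripting against these records, here is a minimal sketch of the per-comment schema implied by the table above and the raw response below. It is an assumption: the Coding name is invented here, and the label sets are inferred only from the values visible on this page, not from an official codebook.

    from typing import Literal, TypedDict

    # Assumed schema, inferred from the values seen on this page; the real
    # codebook may define more labels per dimension.
    class Coding(TypedDict):
        id: str  # YouTube comment id, e.g. "ytc_..."
        responsibility: Literal["none", "developer", "ai_itself"]
        reasoning: Literal["unclear", "mixed", "consequentialist"]
        policy: Literal["none"]
        emotion: Literal["approval", "indifference", "mixed", "outrage", "fear"]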
Raw LLM Response
[ {"id":"ytc_UgwfYHnRIec_UjaORrV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgycnzNreGpB3a7a5Hp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzd-ma0ujZAb5HhHFp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzsZtPkhMQCcCOmHgB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxYn9JXLlg20G_a09d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugz2_DwgYk7tALNnvm54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugwad4p8PY-nWvnjzPN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugx0w3H6RV1sNvUp1ZV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyR6_fTp_kjrcdO_SV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwxlrHOJKfspbgJ1TZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"} ]