Raw LLM Responses

Inspect the exact model output returned for each coded comment.

Comment
I don't know. I've heard all this stuff about how infallible and great Eliezer's arguments are, but after 4 hours, this was extremely unconvincing in any way. I'm not sure if it's because he respected Wolfram too much or what. His main argument seems to be imagine you had an AI so stupid that it's only goal was to make as many paperclips as possible at the expense of everything else but it's so intelligent that it can make money easily, so much in fact that it can build factories and so intelligent that it can invent a Dyson sphere and so resourceful that it can manufacture it and assemble it and blot out the sun but so stupid that the reason it did that is to make more paperclips. That's just way too conflicting and contradictory to me. What's the "realistic" scenario? A paperclip company asks an ASI to make them as profitable as possible. It obviously realizes that if humans are wiped out, then profit is 0, so it goes nowhere near anything that could possibly resemble that outcome. Why is this hard for Eliezer to understand? This is also ignoring how the scenarios that he describes are like 100 years off. People often quote him to say we are 1-2 years away from doom. But we are SO far away from anything described. Actually the biggest threat right now is how stupid the AIs are, and that people trust and believe the output. It's not how intelligent they are.
Source: youtube · AI Governance · 2024-11-13T16:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgwfYHnRIec_UjaORrV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgycnzNreGpB3a7a5Hp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzd-ma0ujZAb5HhHFp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzsZtPkhMQCcCOmHgB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxYn9JXLlg20G_a09d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugz2_DwgYk7tALNnvm54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugwad4p8PY-nWvnjzPN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugx0w3H6RV1sNvUp1ZV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyR6_fTp_kjrcdO_SV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwxlrHOJKfspbgJ1TZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"} ]