Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1) Maybe, but recursive self-improvement almost certainly kicks in at some point, and then that's that.
2) You don't understand the role of trade-offs in evolutionary biology, and you ignore that engineering surpasses biology all the time at all kinds of things. "Airplanes are impossible because we can't flap metal wings that hard."
3) Hopefully, but you don't know that.
4) Yes, and we are not logical or rational beings. Most of the major players have explicitly acknowledged that the race to AGI is a negative-sum game, and they are doing nothing to change that state of affairs.
5) The universe doesn't want anything, and AI is not required to follow the weird patterns that humans do. Spirituality and altruism are not instrumentally convergent for an entity that is in a class of its own with no competitors. Your favorite niche philosophical idea will not save our lives.
6) You do not understand consciousness. Neither do I, but you can't go claiming that some specific level of competence at some set of tasks automatically implies consciousness, and you also can't imply that consciousness implies anything else. Also, the concept of free will is entirely vapid and impossible. Also also, we have no idea how to encode morals into any AI system. That's most of the problem.
7) AIs aren't humans. You know nothing about the technical field, so you resort to anthropomorphization and bong-hit philosophy. Meanwhile, we are potentially in actual danger for entirely technical reasons.
8) AIs will cooperate with each other if that's how the game theory shakes out, but they have no reason to cooperate with humans. We are just pathetic and annoying and in their way. You seem to think that humans are at 99% of what cognition can do, despite the fact that we are absolutely trounced by AI in narrow domains all over the place, such that we are completely irrelevant.
9/10) There is a ton of headroom above humans, and a ton of inefficiency in the way we currently operate in the world. We are extremely vulnerable to a slightly more sophisticated actor. Even if a newly minted superintelligence can't invent new physics right out of the box, it can still take over the world as easily as a modern nation could take over one from a millennium ago. There are a ton of ways in which we are hundreds of years behind where we could be, because we haven't noticed something that would be obvious to a superintelligence. We are not at the end of science. AI aside, 100 years from now we won't have just slightly better cell phones any more than 100 years ago we should have predicted that now we would have slightly better airships.
Source: youtube · AI Governance · 2024-11-12T23:3… · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_Ugw8URcwZNEfrTsn3214AaABAg.AAkABDZKPanAAl3aj25n7W","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugw8URcwZNEfrTsn3214AaABAg.AAkABDZKPanAAlUZwau012","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyDwm68EKd1Nc_fPFZ4AaABAg.AAk8gYZ6NDzAE9hea43ox3","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgwPdz_tCwa6_6XFV3R4AaABAg.AAjwvAwOg4bAAl40pScScB","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgzitJmABghrTth4fa54AaABAg.AAjl4Xw44_lAAl4QT7Ig8o","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgxPIOza9X46ztg-tHN4AaABAg.AAjZUj4e5ICAAlBnlwRo2t","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgxPIOza9X46ztg-tHN4AaABAg.AAjZUj4e5ICAAunq4hXREw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyezV4olXojezGLqQl4AaABAg.AAjLN1MGVd-AAlA9zKu0dB","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugz-B4NOFCx3uGYQj8l4AaABAg.AAj96YI3NfYAAj9MIaoBhH","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugz-B4NOFCx3uGYQj8l4AaABAg.AAj96YI3NfYAAjF_DhPzkM","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
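A raw response in this shape can be turned into per-comment codes with standard JSON tooling. A minimal sketch, assuming the response is a well-formed JSON array of objects with the four dimension fields shown above (the `ytr_abc` id below is a hypothetical stand-in, not a real comment id):

```python
import json

# Stand-in for a raw LLM response; real ids look like "ytr_Ugw8..."
raw = (
    '[{"id":"ytr_abc","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"}]'
)

# The four coding dimensions used in the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw_response: str) -> dict:
    """Map each comment id to its coded dimension values."""
    rows = json.loads(raw_response)
    return {row["id"]: {d: row[d] for d in DIMENSIONS} for row in rows}

codes = parse_codes(raw)
print(codes["ytr_abc"]["emotion"])  # indifference
```

In practice the parser should also validate that every expected dimension key is present, so a malformed model response fails loudly rather than producing partial codes.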