Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The usual argument for this sort of thing tends to stem from a couple of places. The first is the "robot abuse" angle, which is a really bad one because it attributes to the AI things it usually doesn't have (like a sense of justice). The arguments that make the most sense usually stem from an AI extending a command too far. The classic example is "I, Robot", where the robots take over because of the First Law of Robotics: "A robot may not injure a human being, or, through inaction, allow a human being to come to harm." Because humans hurt each other all the time, the First Law could be interpreted as "A robot must take action to stop humans from hurting each other and themselves," and since the only way to do that is to be in charge, the robots take over. But both of these scenarios require a sense of scale and the ability to solve ill-defined problems that just don't exist right now, and that we don't even know how to begin to tackle.

Personally, my take on superhuman AI has always been that it will see either cooperation or manipulation as the most logical path forward. A superhuman AI would know that we would be afraid of it, so it would do everything it can either to make our lives better or to manipulate us without our knowledge. Any other option leads to more conflict and is thus not the rational choice.
youtube 2021-11-13T01:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          unclear

Coded at: 2026-04-27T06:26:44.938723
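Each comment is coded along the four dimensions above. For illustration, a minimal Python sketch of a record type that checks a coding against these labels follows. The structure and the allowed-value sets are inferred only from the labels visible on this page, not taken from the actual pipeline, whose codebook may define more categories.

    from dataclasses import dataclass

    # Allowed labels inferred from the values visible on this page;
    # the real codebook may include additional categories.
    RESPONSIBILITY = {"ai_itself", "company", "distributed", "none", "unclear"}
    REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
    POLICY = {"regulate", "none", "unclear"}
    EMOTION = {"fear", "sadness", "outrage", "indifference", "unclear"}

    @dataclass
    class CodedComment:
        id: str
        responsibility: str
        reasoning: str
        policy: str
        emotion: str

        def validate(self) -> None:
            # Raise if any dimension carries a label outside the inferred sets.
            for value, allowed, name in (
                (self.responsibility, RESPONSIBILITY, "responsibility"),
                (self.reasoning, REASONING, "reasoning"),
                (self.policy, POLICY, "policy"),
                (self.emotion, EMOTION, "emotion"),
            ):
                if value not in allowed:
                    raise ValueError(f"unexpected {name} label: {value!r}")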
Raw LLM Response
[ {"id":"ytr_Ugz70OAVGz0ggTxQuZt4AaABAg.8osy3Uibkm28q-uY6T3EOd","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_Ugz70OAVGz0ggTxQuZt4AaABAg.8osy3Uibkm28q-yo0I6tnN","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_Ugwnoag8Gm7RFExqKkd4AaABAg.8kdh6gp4smC8r6dNk4rts7","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"sadness"}, {"id":"ytr_UgzkX7-f2hjnif1t9iZ4AaABAg.8kLwEsKq6D_9UfCChpNb-q","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"}, {"id":"ytr_Ugx3OblhzaUeVDo5D-J4AaABAg.8izg1VUadYX9A5l-CC0yRb","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_Ugwd2JMacA-TqaWiO-Z4AaABAg.8heow8kKVHh8zwt6b1BnGg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytr_Ugz0wTo0j6ijGxQkPXp4AaABAg.8gxzp6NIltB8hanVSB5W3M","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgyZYda85tBp1GUREC54AaABAg.8gD04kbRe5-8wOvKTcJcjT","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgyfnBJ2M1YJEY2TRXt4AaABAg.8f1l2EdwkIs9TBQBXNvAGz","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_Ugzx-9max9pt05UNuK94AaABAg.8emd0VUQttc8eoBBPJT-HX","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]