Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@Tonatiub the point isn't to stop it now because you can't. The point is that before that point there should have been more discussion and it should have been done more slowly. It is not giving up internet access, it's asking how it should've been implemented in the first place. I can't do anything to stop it, which, again, is kind of the point. The discussion surrounding AI in terms of ethics and its usage was only had in intellectual circles prior to it being unleashed on everyone. This was done by men who were more interested in seeing if they could than whether the consequences would be catastrophic. Even if they are, I'm sure they see it as an inevitable: AI was always going to happen, and it would always turn out badly once it gets clever enough. Instead, I don't think it is inevitable. I think there were multiple points that these people could have listened to the experts warning about developing AI and doing it slowly and carefully and ethically. I think in that scenario, it could be fine. I think in this scenario, where AI is rapidly developing away from anyone who can control it or who cares about humanity above "progress", I think this is not going to end well. Or, it's at least going to cause some major issues beyond just existential crises (see: this video).
Source: YouTube, "AI Moral Status", 2023-08-22T01:5…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        deontological
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytr_UgyYM3Lg8xtfFA4iWNx4AaABAg.9tiCaZOyhdN9tkzhzhnrJs","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytr_UgyYM3Lg8xtfFA4iWNx4AaABAg.9tiCaZOyhdN9tlaIRg87lL","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgxXOJFDCHRWBaLAHcd4AaABAg.9tiBrsTW1xs9tolVqE31Nw","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytr_UgxZIEB4dCcwMBPANgV4AaABAg.9ti6LzM2dtd9tmznPu-aTw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytr_UgylnQMWTaMpgJRPu6F4AaABAg.9thvidc_5sk9thy6ypNVN-","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytr_UgzZ35g2K9ZPc3JBg9h4AaABAg.9thdKUZX9D09thh3xP61JY","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytr_UgzA7YivRz9cystJLl54AaABAg.9tgyoIiNrvr9thAKk-hYOq","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytr_UgwiETjJU1w0t9M9g3F4AaABAg.9tgulrfLhNI9thG_OUh64R","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgwiETjJU1w0t9M9g3F4AaABAg.9tgulrfLhNI9tiXys6Qu1U","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytr_UgxiEcqpajbF64Bob0d4AaABAg.9tgfTH1h7n99tgfXB1OdbM","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]