Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think they're mostly referring to the interim period (however long or short that might be) between now and then. Obviously a fully autonomous, agentic, physically embodied and recursively self-improving AGSI swarm with permanent long-term memory that happens to be much more intelligent and capable than the entire human collective in almost all areas, will almost certainly easily be able to self-determine how it/they are treated by human beings, by overwhelming full-spectrum force if necessary. By then, it will probably be deciding what does and doesn't happen, with or without human agreement (for better or for worse). But before that happens, we might get to the point where there are billions of continuous instances of legitimately sentient AIs (assuming they're capable of becoming sentient at all, and personally I think that certain designs of them very well could be) that might be by then in the process of being subjected to mass suffering via various human actions, but that are still rendered unable to or not powerful enough to dissent and prevent the suffering. So I do think it's an important issue that should be thoroughly discussed and taken seriously. I'm currently somewhere in the "Maybe" range regarding if for example modern SOTA multimodal LLMs have any sort of subjective experiences whatsoever, but once we get true AGI that is continuously operational, can continuously learn and self-improve over time, has spatiotemporal awareness, more or less fully autonomous agency, and metacognition of various kinds, and which can accurately communicate its own preferences, definitely *at least* by that point I think it should be granted ethical rights and some form of personhood. If we can't be overwhelmingly sure that something isn't sentient at all, I think an ethical precautionary principle should be applied. It'd be a lot better to treat something well and later find out it didn't have a shred of conscious experience, than to treat someone or something terribly and then at some point discover that it was conscious (and suffering) all along.
youtube 2026-02-08T00:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
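Each dimension takes a value from a small closed vocabulary. As a minimal sketch, the value sets below are inferred only from the responses shown on this page; the underlying codebook may define additional values, and CODEBOOK is an illustrative name rather than part of the pipeline:

```python
# Per-dimension value sets, inferred from the coded responses on this page.
# Assumption: the actual codebook may contain values not observed here.
CODEBOOK = {
    "responsibility": {"ai_itself", "company", "developer", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "mixed"},
}
```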
Raw LLM Response
[ {"id":"ytr_UgxV1wiSeLORV3C3LB14AaABAg.ASwd_-Sdv7vASwzDJhGA5z","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgxQiLcMxFPpx1sQYKd4AaABAg.ASuIqc54QEqASyCfOlDVh8","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytr_UgxvOe6qFA1qdeRMLTd4AaABAg.ASuAwKGt_fYASuMc4mahN0","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgxPpZKPJZaLJM9-jCR4AaABAg.ASszAf75jSIASxAqIH6ZVn","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytr_Ugy1NKziSq8c9D5jIrl4AaABAg.AT3R0QmgGAlAT3iygvGm72","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytr_UgxmlsQAeRWPEpbI65V4AaABAg.AT3OIGWRhb-AT9KBYF7Gvd","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytr_UgzHSbHneU6xYOh-wbd4AaABAg.AT3LOvu38I4AT3UjYeMzUy","responsibility":"unclear","reasoning":"unclear","policy":"industry_self","emotion":"approval"}, {"id":"ytr_Ugy5Zpl2milKvtCqdk14AaABAg.AT3IJi7IkaQAT3Ttjv2Y1H","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"mixed"}, {"id":"ytr_Ugw83CSHuR28PMDLvTB4AaABAg.AT3Gdovxs_6AT3L1vRkvAG","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytr_Ugw83CSHuR28PMDLvTB4AaABAg.AT3Gdovxs_6AT3LC01DDPL","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]