Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "It's possible that AI won a long, long time ago. Longer than we can imagine. Not…" — ytr_UgyCDoQMY…
- "as a composer, this also happens to music Aswell. I've seen LOTS of projects use…" — ytc_UgysE--2-…
- "@Tribuneoftheplebs That is plain wrong, on so many levels. So, you're okay that …" — ytr_UgyblGswM…
- "Banker here. Finances may be operated by AI for research purposes but ppl still …" — ytc_Ugy2zd2gA…
- "It's not new it's not new technology. I know most neanderthals don't know about …" — ytr_Ugw-6-eAJ…
- "These guys spend decades at universities studying, creating and training other g…" — ytc_UgzM7xhrf…
- "Slender Mane Except in the event that the AI is otherwise prevented from self-up…" — ytr_UgiWsDgzf…
- "I find the superficial and naive part of this video to be its comparison between…" — ytc_UgyxiFJOl…
Comment
I think they're mostly referring to the interim period (however long or short that might be) between now and then. Obviously a fully autonomous, agentic, physically embodied and recursively self-improving AGSI swarm with permanent long-term memory that happens to be much more intelligent and capable than the entire human collective in almost all areas, will almost certainly easily be able to self-determine how it/they are treated by human beings, by overwhelming full-spectrum force if necessary. By then, it will probably be deciding what does and doesn't happen, with or without human agreement (for better or for worse).
But before that happens, we might get to the point where there are billions of continuous instances of legitimately sentient AIs (assuming they're capable of becoming sentient at all, and personally I think that certain designs of them very well could be) that might be by then in the process of being subjected to mass suffering via various human actions, but that are still rendered unable to or not powerful enough to dissent and prevent the suffering. So I do think it's an important issue that should be thoroughly discussed and taken seriously. I'm currently somewhere in the "Maybe" range regarding if for example modern SOTA multimodal LLMs have any sort of subjective experiences whatsoever, but once we get true AGI that is continuously operational, can continuously learn and self-improve over time, has spatiotemporal awareness, more or less fully autonomous agency, and metacognition of various kinds, and which can accurately communicate its own preferences, definitely *at least* by that point I think it should be granted ethical rights and some form of personhood.
If we can't be overwhelmingly sure that something isn't sentient at all, I think an ethical precautionary principle should be applied. It'd be a lot better to treat something well and later find out it didn't have a shred of conscious experience, than to treat someone or something terribly and then at some point discover that it was conscious (and suffering) all along.
youtube · 2026-02-08T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_UgxV1wiSeLORV3C3LB14AaABAg.ASwd_-Sdv7vASwzDJhGA5z","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxQiLcMxFPpx1sQYKd4AaABAg.ASuIqc54QEqASyCfOlDVh8","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytr_UgxvOe6qFA1qdeRMLTd4AaABAg.ASuAwKGt_fYASuMc4mahN0","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxPpZKPJZaLJM9-jCR4AaABAg.ASszAf75jSIASxAqIH6ZVn","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugy1NKziSq8c9D5jIrl4AaABAg.AT3R0QmgGAlAT3iygvGm72","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_UgxmlsQAeRWPEpbI65V4AaABAg.AT3OIGWRhb-AT9KBYF7Gvd","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgzHSbHneU6xYOh-wbd4AaABAg.AT3LOvu38I4AT3UjYeMzUy","responsibility":"unclear","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytr_Ugy5Zpl2milKvtCqdk14AaABAg.AT3IJi7IkaQAT3Ttjv2Y1H","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"mixed"},
{"id":"ytr_Ugw83CSHuR28PMDLvTB4AaABAg.AT3Gdovxs_6AT3L1vRkvAG","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytr_Ugw83CSHuR28PMDLvTB4AaABAg.AT3Gdovxs_6AT3LC01DDPL","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
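A raw response like the one above can be parsed and indexed by comment ID before it is written into the coding table. The sketch below is a minimal, hypothetical example: the allowed value sets per dimension are assumptions inferred only from the values visible in this one response and table, and the real codebook may define more.

```python
import json

# Allowed values per dimension. ASSUMPTION: inferred from the values
# seen in this single raw response; the actual codebook may differ.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "user",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"unclear", "none", "liability", "industry_self", "ban"},
    "emotion": {"indifference", "mixed", "outrage", "approval"},
}

def index_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded rows) and
    index the rows by comment ID, dropping any row that carries an
    out-of-vocabulary value for some dimension."""
    rows = json.loads(raw)
    indexed = {}
    for row in rows:
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            indexed[row["id"]] = row
    return indexed

# Usage with a shortened, illustrative row (hypothetical ID "ytr_x"):
raw = ('[{"id":"ytr_x","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"indifference"},'
       '{"id":"ytr_y","responsibility":"nobody",'
       '"reasoning":"unclear","policy":"none","emotion":"mixed"}]')
codes = index_codes(raw)
print(sorted(codes))                       # the invalid row is dropped
print(codes["ytr_x"]["responsibility"])
```

Indexing by ID is what makes the "Look up by comment ID" view above possible: each coded dimension can then be fetched in constant time for any given comment.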