Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
They want ai controlled federally, but Education, Womens medical care, FEMA, an…
ytc_UgxpPCvTa…
AI only references art work, just the exact same way an artist references other …
ytc_Ugynf0IsR…
I cilcked on this sound and it showed a ai showing the "last day of earth" it wa…
ytc_UgxXIbqTl…
For his safety I think, there is an interview (forget which one) that shows one …
ytr_Ugxsnrs2I…
Yeah, it is your right to harass other people just because you had a bad day. Do…
ytr_UgxK8nUn8…
Completely different. We are talking about the careers people want to have. The …
rdc_jtz8426
Like one of my favorite comediens once said "What I like about Grok is that occa…
ytc_UgxeFAqzx…
@sammypsychosis1674give it a go yourself. Trust me, I can make an llm do, almos…
ytr_UgzQZqlbU…
Comment
Agreement:
I certainly sympathise with some of your arguments about how the focus on AI rights or robot rights, once an allegory for the existing oppression of marginalised communities during industrialisation, might be used to obfuscate existing rights issues, namely human rights abuses and the exploitation of ethnic minority communities. And we can't ignore how AI systems are now being used largely as tools of domination or inequality rather than mostly as subjects of domination. Much of our anthropomorphic understanding of AI/ML/robots and this trend of robot slurs has also been shaped by historical allegories of robot exploitation, and you're right that these trends might be indicative of culturally familiar expressions of discontent towards perceived AI domination.
Point of Contention:
However, I feel a bit hesitant to agree with your conclusions about AI/robot moral patienthood. I don't think we can be so certain that robots or AI systems are incapable of experiencing or deserving rights, and it seems unwise not to consider it at all. Historically, we've underestimated the emotional and cognitive capacities of non-human animals to justify cruelty, assumed humanhood was equivalent to moral patienthood, and deprecated the value of non-sentient nature to justify using it as an endless resource to plunder or dominate rather than a condition of life with intrinsic value. It seems premature to dismiss AI and its rights, especially as the field is growing and AGI might emerge, since such systems might develop consciousness in the future. That's also ignoring that we might have a sentience bias. Perhaps we have obligations to non-sentient life.
We also can't really use anthropomorphic, agentic language to label robots or ML algorithms as the bigots or dominators, even if they might be used as instruments of bigotry, if we're going to deny that they are conscious or agentic. It is our biased societies and tech companies that are bigoted, and the AI is just mirroring that. We can't project our shortcomings onto non-agentic machines. We need responsible tech companies and environmentally conscious tech, not AI witch trials.
And I think the robot-slur memes are morally questionable for a multitude of reasons. Some of it feels like racist dogwhistling given a veneer of innocence or acceptability. And, assuming they do develop consciousness, it will just be bigotry akin to speciesism. Even if they are never sentient, it still reflects poorly on us: how we treat simpler and more vulnerable beings like non-human animals, or even non-sentient life like nature, expresses things about our character and how we might deal with each other or with more vulnerable forms of existence.
youtube
2025-10-08T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
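The coding-result table above is a straightforward projection of one coded record onto the four dimensions plus a timestamp. A minimal sketch of that rendering step, assuming the record shape shown in the raw response below (`render_coding_table` and the inline `record` are illustrative names, not part of the tool):

```python
# Hypothetical record, using the same values as the table above
# (matches the record with "reasoning": "mixed" in the raw response).
record = {
    "responsibility": "none",
    "reasoning": "mixed",
    "policy": "none",
    "emotion": "indifference",
}

def render_coding_table(rec: dict, coded_at: str) -> str:
    """Render a coded record as a markdown Dimension/Value table."""
    rows = ["| Dimension | Value |", "|---|---|"]
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        rows.append(f"| {dim.capitalize()} | {rec[dim]} |")
    rows.append(f"| Coded at | {coded_at} |")
    return "\n".join(rows)

table = render_coding_table(record, "2026-04-27T06:24:53.388235")
print(table)
```

Keeping the dimension order fixed in code (rather than iterating over the dict) guarantees the table rows always appear in the same order regardless of how the record was serialized.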
Raw LLM Response
[
{"id":"ytc_UgyHQdLBGQnbG9XrN254AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2-U8V_1q-TWjUPq94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzc_kbYPP3J64STzy94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzEVCLRlTMLKUUwh214AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxYcdap3hipnwL8NOB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZya8GCYlGmIy1E-t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxt30yf0E4kLC9Kltp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgzPLHzLfnfp8BFo8Vx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLoSrixqg5Oi_3bb54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyNcoeJd3YSuEbyqYl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"}
]
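Since the raw model output is a JSON array of flat records, it can be parsed and sanity-checked before the codes are stored. A minimal validation sketch, assuming the category sets are exactly the values that appear in this response (the real codebook may allow more values; `SCHEMA` and `validate_codes` are illustrative names):

```python
import json

# Allowed values per dimension, inferred only from the codes visible above —
# an assumption, not the tool's actual codebook.
SCHEMA = {
    "responsibility": {"none", "company", "ai_itself", "user", "developer"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "liability", "ban", "industry_self"},
    "emotion": {"indifference", "outrage", "fear", "resignation"},
}

def validate_codes(raw: str) -> list:
    """Parse a raw LLM response and check every record against the schema."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records

# Example: first record from the response above.
raw = ('[{"id":"ytc_UgyHQdLBGQnbG9XrN254AaABAg","responsibility":"none",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
codes = validate_codes(raw)
print(codes[0]["emotion"])
```

Failing loudly on an unknown value catches the common failure mode where the model invents an off-schema label, which would otherwise silently pollute downstream counts.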