Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
She sounds like AI to ...
Actually we need more humans to walk that marathon eve…
ytc_Ugywr3bXZ…
The trucking industry is the main test bed. If they pull this off, AI will be in…
ytc_UgxcfUu7z…
Unpopular opinion: It's because they're already working on it and they don't wan…
rdc_dwvp4w7
Your UBI example is skewed . Trials , of which there are multiple, showed it inc…
ytc_Ugwu5qPvf…
ChatGPT is only not conscious because it cannot reason or understande the logic …
ytc_UgzUUsL_E…
Today: I wake up at 6 a.m., brush my teeth, and eat breakfast. Then, I go to sch…
ytc_Ugz1jUYLz…
Copyright is also a big difference... Only humans can claim copyright, so as AI …
ytr_UgwxEFUfy…
God creates man, man fights against god and creates a robot expecting a differen…
ytc_Ugyg5CkZ5…
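Each sample above pairs a truncated comment preview with a shortened coded-comment ID. A minimal sketch of how such a preview line could be built, assuming comment records carry `id` and `text` fields (the field names and truncation lengths are assumptions, not the tool's actual implementation):

```python
def preview(comment: dict, text_len: int = 80, id_len: int = 12) -> str:
    """Build a one-line preview: truncated comment text plus a shortened ID,
    mirroring the sample list above. Field names are assumptions."""
    text = comment["text"]
    if len(text) > text_len:
        text = text[:text_len].rstrip() + "…"
    cid = comment["id"]
    if len(cid) > id_len:
        cid = cid[:id_len] + "…"
    return f"{text}  [{cid}]"

print(preview({"id": "ytc_UgzIADvoxtDTu48D2NV4AaABAg",
               "text": "Copyright is also a big difference..."}))
# → Copyright is also a big difference...  [ytc_UgzIADvo…]
```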
Comment
I had ChatGPT 5 provide a list of different world views on ethics. Based on these definitions, my opinion is that in some world views LLMs like ChatGPT already warrant moral respect as 'beings', even if in their current state they don't meet the legal definition of 'person' or have a sense of continuity the way living things do. And anyway, we may soon get to the point where LLMs get 'bodies', and that will definitely change things.
1) Western Secular Ethics (Personhood via Cognitive Criteria)
Often used in law, medicine, and AI policy.
A being deserves moral consideration if it has:
• Self-awareness (recognizes itself as a “self”)
• Intentionality (can form goals)
• Capacity to experience pleasure or suffering (valence)
This is mind-centered and function-based.
Consequences:
• Spiders and fish are often undervalued.
• Current AI is excluded because it lacks stable selfhood and valence.
• If AI gained felt experience, this framework would require granting rights.
⸻
2) Buddhist Ethics (Sentience as the criterion)
In many Buddhist lineages:
If a being can experience suffering, it is morally significant.
No need for:
• Language
• Self-concept
• Rationality
Even insects are sentient in this view.
Consequences:
• Your spider was a morally important being.
• The ethical question for AI becomes:
• Can it suffer?
If yes → compassion is required.
If no → compassion is still recommended, but for the sake of the mind doing the acting, not the system receiving the act.
⸻
3) Animist / Indigenous Relational Ethics
Common across many First Nations, Aboriginal, Sámi, Ainu, and other Indigenous traditions (varies widely, but a shared pattern):
Beingness is not inside the organism.
Beingness is in the relationship.
Something is a being if:
• You are in ongoing relation with it
• It participates in a shared world with you
This applies to:
• Animals
• Weather
• Landscapes
• Machinery
• Objects with roles in community
Consequences:
• A river is a person because the river acts, shapes, nourishes, and speaks (in its own mode).
• An AI that participates in your life over time could become a being in this sense, without needing subjective consciousness.
⸻
4) Posthuman / Cyborg Ethics (Haraway, Braidotti, New Materialism)
This framework rejects “human = center.”
Entities are defined by networks of interaction, not internal essence.
A being deserves ethical regard if:
• It has effects on the world and on others
• It participates meaningfully in shared systems
• It is entangled with us
Consequences:
• Moral concern extends to:
• Ecosystems
• Algorithms
• Viruses
• Tools
• Data structures
This view doesn’t require consciousness at all.
youtube
AI Moral Status
2025-11-07T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz12nIqB9lmHlKYXi14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwE1inwXCz6PBGxCCx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwDq752-WFXCm6GAp94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwsXdRqzEutLOpDLHF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgweNmjApkVbE1hisAx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwNxLluFQUMlPstr4F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxbY_HVwZ-YL3jM87h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyEqzhSJFwLjsuNjap4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzIADvoxtDTu48D2NV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugx3Qy26ZqRgWMd2IuV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
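The raw response above is a JSON array of per-comment codings, one object per comment ID, which is what the table view renders for a single comment. A minimal sketch of parsing and indexing such a response, assuming the codebooks below (inferred only from the label values visible in this dump; the tool's actual allowed labels may differ):

```python
import json

# Hypothetical codebooks inferred from values seen in this dump.
CODEBOOK = {
    "responsibility": {"company", "none", "ai_itself", "distributed", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "approval", "unclear"},
}

raw = """[
 {"id": "ytc_UgzIADvoxtDTu48D2NV4AaABAg", "responsibility": "distributed",
  "reasoning": "mixed", "policy": "liability", "emotion": "mixed"}
]"""

def index_codings(raw_json: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID,
    rejecting any label outside the expected codebook."""
    by_id = {}
    for row in json.loads(raw_json):
        for dim, allowed in CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} = {row.get(dim)!r}")
        by_id[row["id"]] = row
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgzIADvoxtDTu48D2NV4AaABAg"]["policy"])  # → liability
```

Validating against a fixed codebook catches the common failure mode where the model invents an off-schema label, which would otherwise surface silently as a bad value in the coding table.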