Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This conversation is unproductive.
Morality, value, pain, suffering, and consciousness apply only to living entities. Only beings whose existence depends on acquiring life-sustaining values have use for them. For nonliving things, these concepts have no meaning.
Consciousness implies a need for knowledge. A conscious being must understand the world to identify and acquire what it needs to survive, which presupposes values. Gain or loss, pain or pleasure are evaluations of success or failure in securing those needs.
Morality, as the rules by which one must live, applies only to entities with a conditional existence requiring correct choices. Nonliving things have no such condition and no use for these principles.
This is the issue with discussions about AI: a misunderstanding of life and consciousness. Artificial intelligence could, in principle, be alive, conscious, and have moral status, but only if it is engineered as artificial life. That is the precondition. We are not there. Until then, these discussions are a dead end.
youtube · AI Moral Status · 2026-04-04T15:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
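A coded record like the one above can be sanity-checked against the coding scheme before being stored. A minimal sketch, in Python, assuming the value sets visible on this page (the real codebook likely defines additional categories, and `invalid_fields` is an illustrative helper, not part of the tool):

```python
# Allowed values per dimension, inferred from the codes visible on this page;
# the actual codebook may define more categories than are shown here.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "government"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none"},
    "emotion": {"indifference", "outrage", "approval", "fear", "mixed"},
}

def invalid_fields(record: dict) -> list[str]:
    """Return the dimensions whose value falls outside the allowed set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above:
coded = {"responsibility": "none", "reasoning": "deontological",
         "policy": "none", "emotion": "indifference"}
print(invalid_fields(coded))  # []
```

A record with an out-of-scheme value (or a missing dimension) comes back flagged, which makes it easy to reject malformed LLM output before it reaches the database.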
Raw LLM Response
```json
[
  {"id":"ytc_Ugx_kyDlk9eMHRRXHuR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxVtENkOmqUJ8PPgrZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxyONqzabbQShv_J_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzkMCFvW9QSbT27j4V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx2TAdkJNBkXgtEHQR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw-lhyM_ZAdszAzRR14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwKJ1pa5AQo59Q4ODx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzH2AttDUf9EifvHYB4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw2V6VaKCzXyTIR94Z4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwX9HxdzZEN0I6L1bB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```