Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- @rustydowd879 You realize that corporations have been using robots and AI for a … (ytr_Ugy_LKvwU…)
- You know that doesnt debunk the idea... the backgrounds can stay normzl in new m… (ytc_UgxfKQE6c…)
- No because AI more often than not looks terrible unless you glance at it for 2 s… (ytc_UgxDb5Typ…)
- Humans love blaming everything on everything else but their damn selves! Humanit… (ytc_Ugw09Q4ud…)
- Listen, if AI can allow me generate the stories I want, like changing the entire… (ytc_UgwishcQL…)
- @inanis6707 no thanks. Would rather have the internet shut down entirely or AI g… (ytr_UgyDJ_DnH…)
- Thank you for your kind words. I am not in knowledge of such AI tool.… (ytr_Ugy895niN…)
- (this is a *yes, and* comment) ah yes, we all agreed no one should use chemical… (ytc_UgzByFiti…)
Comment
The issue is that you're leaping from a probabilistic understanding of reality to pretending that a probabilistic understanding is automatically an objective understanding of reality.
In theory the two are very similar - other minds probably experience the world the same as I do, they probably have vaguely similar preferences, the observed world is probably real, etc. - but there's a huge difference between treating those as a set of useful assumptions and knowing any of them for certain.
Even at the most trivial level, there are plenty of subjective experiences that people have radically different reactions to - pain and privation, sexual experiences, what brings life satisfaction, etc. Not only are those experiences different, the way people interpret them is radically different depending on their worldview and understanding.
>Now does any of that prove objective morality? I daresay it doesn't. But by the same token, nothing can prove objective reality either. I'd say that the 10 points above prove an objective morality, or at least a very workable and practical and pragmatic morality, about as well as it is possible to be proven
Here's the crux of the problem - it points to the idea that some vague, general principles can be commonly held. You can make a materialist argument for why people should probably follow the golden rule, for example. But when you actually drill down to specific moral issues, you're no further ahead than when you started.
There are still plenty of moral arguments you can make that start from the exact same assumptions and come to radically different conclusions. The consequence of the assumptions you're making here is that you're left with a radically subjective morality, one virtually powerless to make prescriptive judgements on anyone's behaviour beyond the most pointlessly destructive kinds.
reddit · AI Moral Status · timestamp 1415034364 · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_n8jknk3","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_n8j76rx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"rdc_n8jdfel","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_clrt2bh","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"rdc_clsif6k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
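The raw response above is a JSON array of per-comment codes along the four dimensions in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be parsed and reindexed for lookup by comment ID — the function name and the indexing scheme are illustrative assumptions, not part of the original tooling:

```python
import json

# Dimensions taken from the coding-result table above; any value sets
# beyond those visible in this response are not assumed here.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Keep only the expected dimensions; a missing key raises KeyError,
        # which surfaces malformed model output early.
        coded[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

# Two records reproduced verbatim from the response above.
raw = '''[
  {"id":"rdc_n8jknk3","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_clsif6k","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

coded = parse_coding_response(raw)
print(coded["rdc_clsif6k"]["reasoning"])  # consequentialist
```

Indexing by `id` is what makes the "look up by comment ID" view cheap: one parse per response, then constant-time lookups.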