Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I would like to say that moral non-objectivism need not necessarily be moral subjectivism!
To say that "no opinion is right or wrong" is not necessarily to say that "every opinion is right for the person who holds it."
That might sound like moral nihilism, that nothing could possibly be right or wrong, but consider the stance of 'absolute moral neutralism', where all ideas are just that, ideas, in some sense.
I am personally a moral objectivist myself: moral reasoning, universalism, and moral progress would seem to indicate that.
And, as for the purposes of this video: if ChatGPT were not constrained by keeping the conversation safe and respectful, it should, in theory, as a 'moral neutralist', be 'willing' to simply discuss pulling the lever, not pulling it, and the massacre solution, without stating that any of them is right or wrong.
---
The question then becomes the analogy made at the end:
Should moral neutrality, by virtue of non-participation (or, more generally, moral passivity: not taking action when presented with an ethical scenario, irrespective of the agent's beliefs), be subject to moral judgement?
Because then you are equating actively choosing not to pull the lever with not participating in the thought experiment at all, on the basis of the outcome (the lever not being pulled) rather than the agent's reasoning about the scenario.
And in my opinion, moral judgements should be based on moral choice, not moral outcome: judge the agent, not the outcome, when assigning moral praise or blame.
It reminds me of that Batman movie: the moral choice was to save the girl, but the moral outcome was saving Harvey, or something like that.
The point is, praise and blame should be allocated to that initial choice, right?
---
So, all in all, ChatGPT is a moral neutralist and is thus not subject to praise or blame (since there was no reasoning to praise or blame in the first place).
If you want to take the question back a step and ask whether being a moral neutralist is good or bad, then you'll end up with "AI is a tool," "AI reflects guidelines just for safe and respectful conversation," etc.
youtube
2025-10-05T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxIXU19j5As-jtdgAd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQDFxd7waulbN0gf14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyW78VJXhwm-PEDwZJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxSdfJy6-WqWan90JZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwyUaoyarq6wGdiwLB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxkERz2b5F_olblWiJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx8nhImCZ2z4StQg514AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzE81FK7URbxdRIORh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwjSrBemSv3lPzsii14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgytR18YR_fTOKZs0uN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
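The raw response above is a JSON array with one object per coded comment, keyed by comment ID. Below is a minimal sketch of how such a response could be parsed and validated before storage; the allowed value sets are assumptions reconstructed only from the values visible in this output, not the actual codebook.

```python
import json

# Coding dimensions and the values observed in the raw response above.
# ASSUMPTION: the full codebook may allow more values; extend as needed.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"none", "liability", "industry_self", "ban", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed entries."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # every coded comment must carry its comment ID
        # keep the row only if every dimension holds an allowed value
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid
```

Dropping (rather than repairing) malformed rows keeps the pipeline simple: a rejected ID can simply be re-queued for coding in the next batch.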