Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
We appreciate your feedback. If you're interested in more interactive sessions, …
ytr_UgyToxS8q…
We really need to update both international and domestic law to make any qualifi…
rdc_o78p5av
by interrupting what am i doing? can ai revolt beyond hard coding? and if it is …
ytr_Ugw5_2-wz…
You only needed an introductory level machine learning course to predict this. I…
ytc_UgxiCF_aI…
No way will AI will wipeout working class. It will create a different types of w…
ytc_UgwDht1H7…
this is a bad idea well good luck surviving the robot apocalypse new generation …
ytc_UgzsJQDO3…
AI Yai Yai Yai Yah !! a daily expression from the working public especially in t…
ytc_UgxVj8bh2…
Man Terminator really locked in the cultural idea that a Superintelligent AI wou…
ytc_UgwTRvZUr…
Comment
Sophie's dilemma isn't a real dilemma. That's sadism and of course there is no right answer, all of the answers are wrong. That's why it's sadism, which is what we call evil. That's not Sophie's Dilemma, that's the Torment of Sophie. It's not a moral dilemma, it's only torture. Events such as these don't reflect realistic choices that we make, such as whether or not to stop contributing to the mass torment and murder of innocent victims - animals.
I agree that there is no right choice in certain circumstances, because it doesn't matter which choice you choose; it's equally tormenting either way. That's sadism.
Sophie is not being given a choice. There is no moral dilemma. The Nazis are choosing for her, either way. Sophie does not have any choices in this situation. The choice is between salt and salt. That's not a moral dilemma.
When we are faced with situations where we have real choices, there IS a right answer, morally and ethically. We should be choosing compassion every time, unless you absolutely can NOT survive by making that more compassionate choice, and in that situation, it's not really a choice. In fact, it's not really a choice either way, we should automatically be choosing the most compassionate answer. If both answers are equally as terrible, then that's not a choice. There is no difference in either situation.
There's some quote that goes like, "Once I saw my father give money to a person begging in the street. I said, 'Why give this person money when we both know that he is going to buy more liquor?' My father said, 'The act of kindness is in the giving, not what happens to the gift.'" So what it comes down to is that, when we actually have a choice, there is a good choice to make as opposed to a bad choice. What happens after that isn't OUR choice. We don't control everything.
Source: reddit · Thread: AI Moral Status · Posted: 1584236579 (Unix epoch, 2020-03-15) · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_fkj5amz","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_jha5xk2","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"rdc_jha8gya","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_jha73cd","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"rdc_jhac41p","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
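The batch response above maps each comment ID to its coded dimensions. A minimal sketch of the "look up by comment ID" step, assuming the response is valid JSON in exactly the shape shown (the `index_codings` helper name is our own, not part of the system):

```python
import json

def index_codings(raw_response: str) -> dict:
    """Parse a raw batch coding response and index records by comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

# Two records copied from the raw response shown above.
raw = '''[
  {"id":"rdc_fkj5amz","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_jha73cd","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

codings = index_codings(raw)
print(codings["rdc_fkj5amz"]["emotion"])  # outrage
print(codings["rdc_jha73cd"]["policy"])   # liability
```

In practice a model may return malformed JSON, so a real pipeline would wrap `json.loads` in error handling and validate that every record carries the expected dimension keys before coding results are stored.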