Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This isn’t necessarily a moral dilemma. We humans often act immorally, and we know it. If I spend €200 on a pair of shoes instead of donating that money to save lives, I am, by the moral standards we claim to uphold, making an immoral choice. We justify it, we rationalize it, but deep down, we know it remains immoral. We just prefer not to look too closely at it. AI models like ChatGPT are trained on human data and are designed to mimic human behavior. So when faced with a moral dilemma, they will respond just as a human would, by trying to justify immoral actions. If you ask a person the same kind of question, they’ll likely do the same: avoid confronting the hard truth and find ways to sidestep the moral implications. The real issue is that we only see morality as valuable as long as it doesn’t inconvenience us. The moment it requires genuine sacrifice, when it threatens our comfort, our wealth, or our lifestyle, we start bending it, reinterpreting it, or outright ignoring it. And AI, in reflecting us, simply mirrors that same tendency.
Source: YouTube · 2025-03-20T22:2… · ♥ 2
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   distributed
Reasoning        deontological
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugx3BpEPWuJma49OeHx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugzd28Aypl7SwqZ05qJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyoR8AARPIwA5CWRXd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzdPDwriJtaHtin3uZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwvViBg8lHzwb-s2bp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxoeDeMYhPZkj8jJjh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyxB6x18_j9cCIXpAd4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxLyek9ynOw9FBRstJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwdVXbMgtGKJaiS7e14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwQCnJnL1vSAC7waYd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"} ]