Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- "if ai fails, it feels like it's just u americans being doomed with ur economy...…" — ytc_Ugw_-gf1j…
- "Thank you! this interview is so refreshing from a human kind of perspective. In …" — ytc_UgxXyeO2Z…
- "We’re heading for a virtual economy , no one really works but checks are issued …" — ytc_UgzPtn0v5…
- "I love Bernie, but people who know nothing about AI need to stop talking about w…" — ytc_UgzsnTs9U…
- "Obviously it`s not the AIs fault...it`s the idiot human programmer designing it…" — ytc_UgySrI_mH…
- "AI in the medical field is on the one hand amazing but I feel like it may make u…" — ytc_UgwTM1r8c…
- "so ai is better? grabbing peoples art, stripping them from what they are and tur…" — ytr_Ugw-JWGbg…
- "I personally propose we start mass printing images of AI “art” and start burning…" — ytc_UgyumTlAH…
Comment
This isn’t necessarily a moral dilemma. We humans often act immorally, and we know it. If I spend €200 on a pair of shoes instead of donating that money to save lives, I am, by the moral standards we claim to uphold, making an immoral choice. We justify it, we rationalize it, but deep down, we know it remains immoral. We just prefer not to look too closely at it.
AI models like ChatGPT are trained on human data and are designed to mimic human behavior. So when faced with a moral dilemma, they will respond just as a human would, by trying to justify immoral actions. If you ask a person the same kind of question, they’ll likely do the same: avoid confronting the hard truth and find ways to sidestep the moral implications.
The real issue is that we only see morality as valuable as long as it doesn’t inconvenience us. The moment it requires genuine sacrifice, when it threatens our comfort, our wealth, or our lifestyle, we start bending it, reinterpreting it, or outright ignoring it. And AI, in reflecting us, simply mirrors that same tendency.
Source: youtube · Posted: 2025-03-20T22:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx3BpEPWuJma49OeHx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzd28Aypl7SwqZ05qJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyoR8AARPIwA5CWRXd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzdPDwriJtaHtin3uZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwvViBg8lHzwb-s2bp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxoeDeMYhPZkj8jJjh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyxB6x18_j9cCIXpAd4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxLyek9ynOw9FBRstJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwdVXbMgtGKJaiS7e14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwQCnJnL1vSAC7waYd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
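A response in this shape can be parsed and sanity-checked before the codes are stored. The sketch below is illustrative only: the `ALLOWED` sets are inferred from the ten records shown above and are almost certainly incomplete relative to the full codebook, and `parse_llm_response` is a hypothetical helper name, not part of the actual pipeline.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "distributed"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none"},
    "emotion": {"mixed", "outrage", "approval", "fear", "resignation"},
}

def parse_llm_response(raw: str) -> list:
    """Parse a raw batch response and validate each coded record."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        # Every record needs an id plus one value per coding dimension.
        missing = ({"id"} | set(ALLOWED)) - set(rec)
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {sorted(missing)}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {rec[dim]!r}")
    return records

# Example with one record from the response above.
raw = ('[{"id":"ytc_UgyxB6x18_j9cCIXpAd4AaABAg",'
       '"responsibility":"distributed","reasoning":"deontological",'
       '"policy":"none","emotion":"resignation"}]')
coded = parse_llm_response(raw)
print(coded[0]["emotion"])  # resignation
```

Validating against a closed value set catches the most common LLM coding failure, a value outside the codebook, before it silently pollutes downstream counts.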