Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- As someone that got my first ever 100% grade in my life on an assignment, follow… (ytc_UgwN9Ap4I…)
- Maybe this about monopoly of those companies ? About not giving access to to the… (ytc_UgzotHbfS…)
- I for one have no issue having my pride in jeopardy to not starve due to univers… (ytc_Ugy4D474N…)
- First of, i like this idea of acting against art theft, using pieces of art agai… (ytc_UgyVOgNvz…)
- If you did not watch the video here ill tall you about it.Pretty much what they … (ytr_UgyMW3bYw…)
- data entry is probably the only thing ai can maybe do and that's still a big may… (ytr_UgyPtztW3…)
- Adding onto this: "It's inevitable!" - said about crypto, did not happen. said a… (ytc_UgyxtD0Gm…)
- I'm just searching for yanderes AI and I tell I call the police if they don't le… (ytc_UgypnR2-N…)
Comment
Here's my best attempt to point out the errors in Alex's arguments (it's also midnight and I need to go to bed. Oh well):
1) Oversimplification
You can reduce any system down to a model, which simplifies the process of understanding the system as a whole. As the saying goes, all models are wrong (or incomplete), but some are useful. In this case the ethical dilemma is exactly that: a simplified model. You have a clear physical danger (drowning or malaria), and a one-stop shop to fix it (physically intervening or donating money).
If the world really were that simple (which it isn't), then the conclusion would be unavoidable. However, there are other factors at play, such as the sheer number of people in peril, the impossibility of one person intervening or helping everyone with their own time/effort/resources, and the fact that death and suffering are integral to our world. Besides, if a person had a moral obligation to _always_ give their resources to people in greater need, then that person would end up in need themselves and require other people's resources to survive. One could argue that people have a moral obligation to take care of themselves within reason so that other people don't have to, unless they're unable.
Anyway, the problem with the AI's moral guidelines is that they're oversimplified and ungrounded in the real world. ChatGPT is full of knowledge, but not wisdom. Knowledge is knowing that a tomato is a fruit; wisdom is knowing not to put it in a fruit salad. Having a moral philosophy that doesn't go beyond a simple model can sometimes lead to useful insights, but ultimately it has to be grounded in the real world and has to hold up to years, decades, and millennia of being put into action.
As my body is now tired from being up late, I have a moral obligation not to place a burden on others by being too tired to function and carry out my obligations. It is morally imperative that I go to bed. Since it is in my best interest and yours that all humans sleep well to care for themselves and those around them, it is my moral obligation to bid you sleep well too.
Sweet dreams.
youtube
2025-04-23T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy0EXAnjur6K7ASaR54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzialDwqV841UynUMV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwSB6xoHPFShQ8zs0B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwFlnr365YVAqpOsvl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz_vQchG58cr_QIyzN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgweoUuePN514FM8MMp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx_TpUURrW1iAMydJN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw10UAZMWSIllzyAr54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxNif3VHTsPtBKC3sx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzTKl9sl7iYbyhGvaB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
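The raw response above is a flat JSON array of per-comment codings, keyed by `id`, which is what makes the "look up by comment ID" view possible. A minimal sketch of that lookup, assuming only the field names visible in the response above (the IDs here are copied from it; the function name is illustrative):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw = """[
  {"id": "ytc_Ugy0EXAnjur6K7ASaR54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzialDwqV841UynUMV4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]"""

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id):
    """Return the coded dimensions for a comment ID, or None if uncoded."""
    return codings.get(comment_id)

print(lookup("ytc_UgzialDwqV841UynUMV4AaABAg")["reasoning"])  # mixed
```

The same index, built over every response batch, also backs the "Coding Result" table: each table row is just one dimension of the dict returned for that comment's ID.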