Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Oh i had no idea the ai industry was only able to be funded by investors. Good t…
ytc_UgymUMvv3…
That’s the reason why we don’t need AI it’s scary. Some people got the energy th…
ytc_Ugx-cXPze…
> do until civilization is falling apart right in front of their eyes
No.. …
rdc_d0fd480
This is a deadly machine, not liking, Want No part in it, And the Lord (God) wil…
ytc_UgxYSzojA…
Just to be clear, this is not a racism issue This is actually what is considered…
ytc_UgzXW7MDJ…
Hmm. I dont trust this guy to try to make an AI future work well for everyone, f…
ytc_UgzWQmfY-…
What is your male pattern baldness talking about???
Who has called AI that??
…
ytr_UgxJxNcY3…
I'm with you 1000%, great perspective brother. What industry are you in? Any coo…
ytr_UgycpnG2_…
Comment
A lot of this video is quite good; however, it was all done from a huge meta-ethical presupposition: consequentialism (or perhaps, one of its more specific examples, utilitarianism). For those that do not know, consequentialism says that what makes an action right or wrong, ultimately, are its consequences. There are other systems that justify theories of human rights.
For example, the deontological school of meta-ethical analysis, which argues that an action is right or wrong regardless of the consequences. If I may oversimplify, if you have ever received a bad Christmas gift and responded "It's the thought that counts," you have expressed a deontological ethical view.
Kantian ethics are a solid example of deontology. Kant basically equated freedom with our capacity for rationality (otherwise we are a slave to our passions), and so his ethics and theory of rights unpacks from this. His views can be expressed in two ways:
1. An action is right if it is rationally judged to be adhering to a logically consistent moral law. It is logically consistent if you think everyone should do it in this situation and that doing so would not degrade the rationality of society. Basically, you cannot morally justify that you are the exception to a moral law. His example is lying. Lying should always be judged wrong, no matter what, because we do not want to live in a world where lying is permissible. (categorical imperative)
2. An action is right so long as it respects the freedom of others. Therefore, since other beings have rational intentions, and therefore have ends of their own, you should not treat such rational creatures as MERE means to an end. (practical imperative)
The practical imperative has clear applicability to the problem posed by AI, but it already poses problems for how we treat animals too. If a robot or animal can want something, can have ends, then can we justify enslaving or slaughtering them? My vote is no.
This is not to suggest that these are the only coherent ethical systems. Lord knows there are problems with Kant's formulations (and yes, I know I oversimplified, and that if I hadn't, some of the problems would have been more obvious). I merely wanted to illustrate that there are philosophical justifications of theories of rights which are not ultimately about pleasure and pain, but are about other things, like "respect."
youtube
AI Moral Status
2017-02-23T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgjfULFTlSzujngCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UghUrPdZeuQDTHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgilbiiByK6t2XgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgjBaN9K40vRL3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UggBdfltZ6aVe3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugg3qEenr7bb5HgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UghhwKIFwPOHQngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgiQC479EflFHXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgiAdQ2Jyk8Mb3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Uggn55xDcnDCPHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
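The raw response above is a JSON array of per-comment coding records. A minimal sketch of how such a response might be parsed and validated before storing the codings; the allowed category values below are inferred from the sample output and the result table, and are assumptions rather than a documented schema:

```python
import json

# Allowed values per coding dimension (assumed, inferred from the samples above).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "unclear"},
    "emotion": {"fear", "indifference", "outrage", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept if it is a dict with an "id" field and every
    coding dimension holds one of the allowed values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical usage with one record shaped like those above:
sample = ('[{"id":"ytc_EXAMPLE","responsibility":"none",'
          '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')
print(len(validate_codings(sample)))  # 1
```

Dropping malformed records rather than raising keeps one bad line in a model response from discarding the whole batch; a stricter pipeline might instead log and re-prompt for the failing IDs.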