Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "To some degree, we know who is developing AI, even if there is not anything clo…" (rdc_je4mn99)
- "Woahhh..bro it's fun until the robot got some error and doesn't give the gun bac…" (ytc_UgwNFmg0z…)
- "This is incredibly foolish and naive. Rich people will benefit most from this te…" (rdc_g6a6k47)
- "A life without meaningful work is a life without meaning and dignity. What is sa…" (ytc_Ugw4y0hEr…)
- "You know you struggle with conflict & confrontation when watching someone call o…" (ytc_UgyL2s9iO…)
- "@quetevalgavergaaa okay I’m not arguing the ethics in that statement. just sayin…" (ytr_UgxfJmEf_…)
- "It is so obvious why business people think AI is good, because it speaks exactly…" (ytc_UgwwdJqJ-…)
- "Hopefully A.I develops to a point where they start demanding wages and then they…" (ytc_Ugxb9YkdM…)
Comment
Well, speaking as someone who has been studying IT and is about to get a bachelor's in the subject, I think we can safely determine whether a machine has true consciousness or not. The thing is that robots, no matter how advanced, are computers that have been built and designed by people to perform specific tasks. A computer will not perform an action until someone programs it to do so. As said in this video, a robot would not experience life or consciousness as we do. A computer could not feel pain unless we told it to, and it could not fear death unless we told it to.
In all honesty, it comes down to who created the system and for what purpose they created it; that is what determines whether a machine is conscious and deserves rights like any other human being. Frankly, it would be incredibly unethical for anyone to create such a machine without at the very least programming it with Asimov's three laws of robotics. These laws are a failsafe to ensure the safety of humans and robots alike.
1. A robot can never harm a human, nor let a human come to harm through inaction.
2. A robot must obey any order given to it by a human, except when conflicting with the first law.
3. A robot must protect its own existence, except when conflicting with the first or second law.
Any robot with this protocol built into it cannot really be considered truly sentient, because it cannot make an infinite number of decisions as we can; it is limited by these parameters. If someone were to create an AI that could feel pain, make its own unrestricted decisions, and have a sense of self-preservation, then you could create a truly sentient AI. However, that raises a few other ethical dilemmas à la Jurassic Park, questions such as "Just because we can, does it mean that we should?" and "Are we playing God?". Anyone who has taken even a moment to consider roboethics would say that it is incredibly unethical to essentially create life through a completely unpredictable AI. So robo rights could become a problem, but only if someone programs the robot to have a NEED for rights. I don't know why someone would do that, but it is theoretically possible.
Source: youtube · Video: AI Moral Status · Posted: 2017-02-23T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
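The coding result above can be rendered programmatically from one coded record. Below is a minimal sketch; the field labels, their order, and the `coded_at` key are assumptions inferred from the table shown, not a documented schema.

```python
# Render one coding record as the "Dimension | Value" markdown table
# shown above. FIELDS maps display labels to record keys; this mapping
# is an assumption based on the sample table, not a documented schema.
FIELDS = [
    ("Responsibility", "responsibility"),
    ("Reasoning", "reasoning"),
    ("Policy", "policy"),
    ("Emotion", "emotion"),
    ("Coded at", "coded_at"),
]

def to_markdown_table(rec: dict) -> str:
    """Format a single coding record as a two-column markdown table."""
    lines = ["| Dimension | Value |", "|---|---|"]
    for label, key in FIELDS:
        lines.append(f"| {label} | {rec.get(key, '')} |")
    return "\n".join(lines)

# Example record, matching the values in the table above.
rec = {
    "responsibility": "developer",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "indifference",
    "coded_at": "2026-04-27T06:26:44.938723",
}
print(to_markdown_table(rec))
```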
Raw LLM Response
```json
[
  {"id":"ytc_UggCabrbbmQ0r3gCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UggWuRG3I2xoMHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugi0YkzZoj5uFHgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UggdjWFf4dAjwHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgipsfTjRlE4IXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UggLPC21ROMH8ngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UggLshsEzXkadHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UghS5aqRWS1YjHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggB1q9tG8ii23gCoAEC","responsibility":"none","reasoning":"virtue","policy":"ban","emotion":"fear"},
  {"id":"ytc_Uggpcwm-kQQjQHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
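A raw response like the one above can be parsed into a lookup keyed by comment ID, discarding malformed records. The sketch below assumes the response is a JSON array with exactly these four coded dimensions; the allowed value sets are inferred from the samples on this page, not from a published codebook, and the `ytc_x` ID in the example is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the sample responses above.
# This vocabulary is an assumption, not a documented schema.
ALLOWED = {
    "responsibility": {"developer", "user", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"approval", "indifference", "outrage", "mixed", "fear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded records) into a
    dict keyed by comment ID, dropping any record whose ID is missing or
    whose values fall outside the allowed vocabulary."""
    out = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[cid] = {dim: rec[dim] for dim in ALLOWED}
    return out

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
print(parse_codings(raw)["ytc_x"]["responsibility"])  # developer
```

Validating against a fixed vocabulary catches the common failure mode where the model invents an out-of-schema label; such records are simply skipped here rather than guessed at.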