Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
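The same lookup can be scripted. Below is a minimal sketch assuming the coded records are stored one JSON object per line (JSONL) in a file such as `coded_comments.jsonl`; the file name and path are assumptions, but the `id` field matches the records shown further down this page.

```python
import json

def lookup_by_comment_id(path: str, comment_id: str) -> dict | None:
    """Return the coded record for one comment ID, or None if absent.

    Assumes one JSON object per line (JSONL); the path is hypothetical.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: fetch the record for one of the IDs shown below.
print(lookup_by_comment_id("coded_comments.jsonl", "rdc_dy57k0p"))
```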
Random samples (click to inspect):

- Who's going to by these AI run companies services if everyone is jobless. Are th… (`ytc_Ugxew-Zzl…`)
- first i will start with this quote from Machiavelli "You must keep people busy o… (`ytc_UgyLhNbXy…`)
- this is all about bringing in the mark of the beast. once people take the mark y… (`ytc_UgxJgWMVz…`)
- This post is five months old, hence information is out of date. Or could be AI c… (`ytc_UgwHuKYnG…`)
- Please don't call them artists, I prefer the term Artificial intelligence is e… (`ytr_UgwOqHKzP…`)
- I am 66 years old. I have been in global IT for 40 years. I watched as the inter… (`ytc_UgyGONslZ…`)
- Legal departments everywhere are scared of the legal and trade secret implicatio… (`rdc_l56mzy8`)
- It's kinda funny at 13:00 when ChatGpt seems like losing it's patience when aski… (`ytc_UgwDT2zAZ…`)
Comment
If I made a robot, gave it some kind of "learning AI" so it could learn on its own, it still wouldn't have emotions. If I gave that same robot an upgrade with some kind of "Artificial Emotions", it's still a machine that only has the learning ai and artificial emotions I gave it. Even if it "decided" that humans should be killed, it doesn't change the fact that it's still made of metal, plastic, or whatever else it has. Sure, it can "make" decisions, but does it do it "consciously"? I don't think so. It's just a bunch of parts I put together along with a bunch of lines of code to enable it to "make" decisions and "feel".
Is it possible to give "consciousness" to an object? Just because an object reacts doesn't mean it's conscious. Look at those sensors that get triggered when somebody approaches nearby. The sensors will probably open doors, turn on the lights, or do whatever else it was designed to do, that's how it reacts, but that doesn't make the sensors alive.
I think this treatment of robots as if they're capable of making harmful decisions is something that could be maliciously abused. Just imagine. Let's say I provided a little murderous code in the AI. Not blatantly straightforward murder code, but something that "suggests" to the AI that murdering a specific person is something it should do. Let's say the AI takes the bait, builds the actual murder code for itself, which then leads to it designing to kill that specific someone I wanted to get killed. The robot succeeds and all I have to do is pay some kind of robo-insurance to get away with the crime because "the robot decided to kill somebody on its own".
Anyway I don't see anything wrong treating robots with kindness. It's like with the dolls and toys we own. We love our stuff and when we're kids, sometimes we treat these objects as if they're real and as if they have thoughts and feelings. I think that's pretty normal, but when it comes to serious things such as robots murdering people or doing bad…
reddit · AI Moral Status · 1524969035.0 (Unix timestamp) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
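For downstream analysis it can help to check coded values against the label vocabulary before aggregating. The sketch below is a minimal example; the vocabularies are inferred only from values visible on this page and are likely incomplete.

```python
# Label vocabularies inferred from values visible on this page; likely incomplete.
VOCAB = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "indifference", "unclear"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside the known vocabulary."""
    return [
        dim for dim, allowed in VOCAB.items()
        if record.get(dim) not in allowed
    ]
```

Applied to the result above, `validate` returns an empty list, since every value appears in the inferred vocabulary; an unseen label would be flagged for review.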
Raw LLM Response
```json
[
  {"id":"rdc_dy4e3bg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"rdc_dy4ftoz","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
  {"id":"rdc_dy4phxw","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"indifference"},
  {"id":"rdc_dy54eq6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_dy57k0p","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"unclear"}
]
```
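When post-processing raw responses like the one above, the model output has to be parsed back into per-comment records. A minimal sketch, assuming the response is a JSON array like the one shown; the function name is illustrative.

```python
import json

def parse_batch_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response into {comment_id: coded_dimensions}.

    Raises ValueError if the output is not the expected JSON array,
    so malformed responses can be flagged for re-coding.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"response is not valid JSON: {exc}") from exc
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    # A record missing "id" raises KeyError, which also flags a bad response.
    return {
        rec["id"]: {k: v for k, v in rec.items() if k != "id"}
        for rec in records
    }
```

For the response above, `parse_batch_response(raw)["rdc_dy57k0p"]` yields the developer/deontological record shown in the Coding Result table.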