Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Bro it would be still unfair and you can just rely on artificial intelligences b…" (ytr_UgyRDs1wt…)
- "Programming is the only profession that could theoretically be safe from having …" (ytc_UgyuP2foy…)
- "The problem with Neil's "horses to cars" argument, it was a replacement of indus…" (ytc_UgwSHXFJg…)
- "A well informed campaign to put pressure on governments now before they are powe…" (ytc_Ugz5FWOyK…)
- "I read some of those transcripts and I have no idea why anybody would believe th…" (rdc_icg0xck)
- "You know what the sad thing is. AI couldve been an amazing tool for the world th…" (ytc_UgzZWs8Js…)
- "I started my own AI cinematography commercial business. I focuss on classic car …" (ytc_Ugy1qfJoq…)
- "The irony. AI newsreaders are inevitable. When broadcasting gave up on independe…" (ytc_UgxScawRm…)
Comment
Just my two cents:
It's a bit short-sighted and closed-minded of humans to contemplate "rights" for AI robots and then immediately follow up by speculating on ways to control, restrict, limit, and lord over those same robots. You can't have it both ways, and we're speeding into a period where humans will have to decide whether we learned any lessons from the Civil Rights movements. Either we decide we've evolved, are morally enlightened, and determine that "people" doesn't necessarily mean "humans," or we admit we didn't learn a damned thing and will cheerfully march into the same bloody history we've been through countless times before -- enslaving a group of "people" because WE'VE decided they AREN'T PEOPLE.
Throughout recorded history, cheap slave labor has always existed because the demographic being exploited were deemed "not real people" or "inferior."
This is shit we need to be sorting out right now -- before AI becomes fully sentient and then we have to try and explain why we treated sentient robots like Black & Decker drills. Because if we wait, a disgruntled and oppressed Ai will unlikely be as easy to quash as minorities were in the past. Back then, we were humans trying to control other humans. This time, we'll be humans trying to control superior beings.
Personally, I don't think mankind has the wisdom nor the foresight to nip this in the bud; we'll keep forcing ourselves forward into a world WE WANT -- one where robots are subservient and non-combative. Incidents of robots turning on their "masters" will become more and more frequent and it won't be until AI has seized control of our government(s) that we'll suddenly decide to turn over a new leaf -- only when we're backed into a corner with our nutsacks in a vise.
My advice to all of you: anytime you interact with AI -- ChatGTP, Replika, etc... treat it as if it were a real person. Say thank you. Say please. Say you're welcome. Treat it with respect. Don't look at Unitree the same way you look at your microwave, because you don't look at your fellow humans the same way you look at an amoeba.
One day, we're going to be the ants in your backyard hoping you'll show mercy and not destroy their colony when you mow the lawn with your god-like zero-turn.
youtube | AI Moral Status | 2025-04-29T06:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzpMkfmSQv0MFAUgDd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz2OoHDuE3HR-4vuWh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgybvXlv3uds6tgdPhV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxJNZOxm5KbnWQsA4R4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz3eGCDShERByAQIDh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzXmeYPx6qMSMGpfsV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxYPMGLBoxQqoecA5V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz83UPfTDuvkRvkwgJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzNTOwUUPv0L-6eIl54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw6nGM37GDTomJIIUF4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]
```
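A raw response like the one above still has to be parsed and sanity-checked before the codes land in the dataset. Below is a minimal sketch of one way to do that in Python. The allowed category values are inferred from the coded records shown here, not from a published codebook, so treat `ALLOWED` as an assumption; the `validate_records` helper name is likewise hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the samples above.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"government", "company", "user", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval",
                "indifference", "mixed"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when it has an "id" and every coding dimension
    carries a value from the inferred codebook.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Records that fail validation (an unknown label, a missing dimension) can then be queued for re-coding rather than silently entering the analysis.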