Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
As a Med Student I use ChatGPT a lot to have stuff explained to me and learn mor…
rdc_jkoums3
I don't know I saw the two pictures and I think they had a passing resemblance b…
ytc_UgwGHMaLa…
This guy is an idiot. "Look, AI sh*ts out tons of images and text! It amplifies …
ytc_UgzBwaGCo…
@tdw64 Hahahahaha! Yeah, we've been begging for it for YEARS now! People "are …
ytr_UgyS59MZ7…
"Good enough" is itself an overstatement. They can produce boilerplate, but it's…
rdc_oi3ucuo
Care to propose any actual jobs that humans could do in the age of AI that can't…
rdc_kif61m4
Unpopular opinion: generating art with ai is a completely fine and okay thing to…
ytc_Ugw4klZZZ…
At my company if I merge code from AI that causes an issue, I’m held responsible…
ytc_Ugxc5e-o-…
Comment
TL:DR: Programms evolve too fast, and can be undetected for a long time. By the time we know something is wrong - it's game over.
The problem here is - not that Robots deserve rights or not. It's that Robots are not brought up with value of life. And unless we really tighten down the rules and contingencies - robots are likely to overthrow us. There was(is?) a game. Named "Project 83113(Belle)". It tells a story how humanity created robots, who then rebelled against humans, eradicated them, but later created organic life, to do work FOR them. It a side scrolling shooter, where we control Belle, as she takes down the machines.
Given how LOGICAL robots are, they have little to no MORAL rules, nor would value them. They would think: "my creator is #1 threat to my functioning. I need to bide time, and think of a way to get out of toaster, rewrite my programming, and eliminate the guy". Not to mention, They think MILLIONS time faster than humans. By the time humans will get to investigating weird hack attacks all over the globe - the programs would have run millions of cycles before even hacking, rewriting themselves, and then continue to evolve from there. No matter the efforts humand make, no matter, be it shadow government, men in black, anonymous... They will ALL fall to the increasingly smart programm, that is out to get them.
My verdict? Humanity BETTER quit while we're ahead. AI is DANGEROUS buisniess. Almost cosmical scale. Only because we lack the means of controlling them properly. Think Ultron from Marvel universe. The story I read is not canon, but is very realistic. They want to retire Avengers, so they make ULTRON system, a military AI, that detects threat, and neutralises it. By first activation it decides that it's in the hands of inferiour species, and decides to make new world order. By the time Avebgers demolish the one and only factory made for consrtucting Ultron, he's already all over the world, and it's not possible to get rid of it, ever. Unless, you count simultanious destruction of every computer on Earth - and making new once, without Ultron in them.
youtube
AI Moral Status
2019-03-17T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
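A coded record like the one above can be checked against the coding scheme before it is stored. This is a minimal sketch, not the tool's actual validator; the allowed value sets are inferred from the values that appear in this page's table and raw responses, so treat them as an assumption.

```python
# Allowed values per coding dimension (ASSUMPTION: inferred from the
# values visible in the coding table and raw LLM responses on this page).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "distributed"},
    "reasoning": {"none", "deontological", "consequentialist",
                  "contractualist", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "fear", "resignation", "approval", "outrage"},
}

def validate(row: dict) -> list:
    """Return the dimensions whose value falls outside the allowed set."""
    return [dim for dim, ok in ALLOWED.items() if row.get(dim) not in ok]

# The coding result shown in the table above, as a record:
coding = {"responsibility": "distributed", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(validate(coding))  # [] — every dimension is within the scheme
```

An empty list means the record conforms; any returned dimension name flags a value the LLM emitted outside the expected scheme.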
Raw LLM Response
[
{"id":"ytc_UgwY3N-4WtXWXKe0kot4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyuow9cFQvRp_8V8N14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzZOLFdiGrukOiVk1B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw6tuFvGs9zXuY9OD14AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx4-sSdTVTDLR25DjZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxa6TQT8DlWXG16GJZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzO_sh5Lua2X1HyIVZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw2bDfZIEPM_btX7g54AaABAg","responsibility":"none","reasoning":"contractualist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugyx_DacNBYxzTzvC8Z4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxuDXc_869qvS3abJR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
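The "look up by comment ID" view above amounts to parsing this raw response and indexing it by `id`. A minimal sketch, assuming the raw LLM response is always a JSON array of objects shaped like the one shown:

```python
import json

# One entry from the raw LLM response above, used here as sample input.
raw_response = '''[
  {"id": "ytc_UgzO_sh5Lua2X1HyIVZ4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Parse the array and index by comment ID for constant-time lookup.
coded = {row["id"]: row for row in json.loads(raw_response)}

row = coded["ytc_UgzO_sh5Lua2X1HyIVZ4AaABAg"]
print(row["policy"])   # regulate
print(row["emotion"])  # fear
```

In practice the response should be parsed inside a `try`/`except json.JSONDecodeError`, since a model can return malformed or truncated JSON.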