Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I taught my A.I. to learn The Psychology, Sociologist, Psychiatrist and Body lan…
ytc_Ugxn3-PrV…
If so many people are loosing their jobs, than who is going to pay their income?…
ytc_UgwIeb3Z6…
Things plugged in to a computer with a human controlling it. Everything it says…
ytc_Ugw4yRflB…
If you earn £2000 in the bank each month, have only £600 left after bills and yo…
rdc_d7ksscj
Interesting stage 4 of AI is chatgbt then the next level will be AGI. Obviously …
ytc_UgxupzT-M…
Humans will eliminate humans,.. we should afraid desperate men in power more tha…
ytc_UgxCzupU0…
Correction, Twitters AI, which caters to the people who use Twitter, reflects th…
rdc_h8f8jgl
It is not. AI is doing a great job. Ever called a clueless customer service agen…
ytc_Ugw2Tv8m8…
Comment
"Animals are born with the purpose of survival. It's the most basic instinct. Eat, crap, multiply."
I'd like to think that there's more to the difference between human beings and animals than that, but that may be all there is.
"Intelligent, conscious robots would likely not be programmed for any one particular purpose because such qualities are not needed for a machine to run a single block of code over and over."
They would most likely be programmed to serve a single purpose that is made up of multiple smaller purposes, such as the operation of a vehicle or the entertainment of a human being, as most "smart" machines are already intended for.
"It would be a waste of money, and quite stupid on behalf of the creator to give said machine the power to revolt by giving it the ability to say, "I don't want to.""
That is literally the least harmful path of defiance a machine could possibly take. And they already do that in the form of errors where it's intentionally programmed to prevent larger malfunctions, and bugs where it's an unintended result of faulty programming.
"And also consider that the upbringing of such creations would likely stem from human laziness. Wanting to have something else do out jobs for us."
That is... not exclusively a matter of laziness. We have things like calculators and prosthetics to do work that the humans actually can't or would take too long to do themselves. The real concern there is the inevitable point where the machines reach the conclusion that we are the obsolete element that needs to be replaced in order to improve efficiency, because the people who create programs and machines are already doing that.
"Of course an intelligent robot would eventually get tired of the human slob nature and act in some way (be it positive or negative) to bring change."
Perhaps. Or perhaps they will find human laziness to be their raison d'etre and develop a dependency on it, should they develop emotions. Again, the nature of their evolution to intelligence is something that absolutely cannot be ignored. Machines are not living beings. They will be developing sapience first and the things that caused humans to need it, at best, as an afterthought. This will make artificial intelligence an absolutely unique thing in our world, because it will have evolved in something that will not have had concerns for its own survival prior to doing so. It will have no "animal instincts" to drive its behavior. If it acquires the ability to feel pain or process emotions, it will have done so intentionally in order to suit a perceived need, although what need that might be is unknown.
This goes back to the matter of "rights". Rights, in legal terms, are protections afforded to people and animals in pursuit of what enough people have agreed will contribute to their, and by extension society's, well-being. They vary by reigning government and are largely morally driven, which is a large part of what's wrong with them, since humans are awfully spastic about their morals. In order to justify giving machines rights, a lot of questions need to be answered. What benefits will machines gain from having personal rights? What benefits will societies gain by granting these rights to machines? What level of programming complexity must be met in order to consider a machine "sentient" enough to deserve rights? How would a program, given access to the internet which can function as a global hive-mind for it, identify its "self"? Those are just examples going off the top of my head. There are probably countless others.
youtube
AI Moral Status
2018-08-18T04:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_Ugy5DaglRDrjepjKOnJ4AaABAg.8k1oL2OsUM38k2HC6qNsvX","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugy5DaglRDrjepjKOnJ4AaABAg.8k1oL2OsUM38k4OBeUHzbB","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgwjhyzmVzw3QHNLkD14AaABAg.8j681g-6pH08jn55lmFzj1","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwE9-rJ1xtbiQ98yed4AaABAg.8j4hm3hL45W8pEGW2nchfa","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxbgNKJMW57e2gSy1B4AaABAg.8ihGfuOesc58knTYVLHAQL","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgxbgNKJMW57e2gSy1B4AaABAg.8ihGfuOesc58lo7PgtEaz8","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytr_Ugz9OA74hhHCgKiOpxN4AaABAg.8iVAN3Or4Ih8m1tk-hBAP4","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_Ugz9OA74hhHCgKiOpxN4AaABAg.8iVAN3Or4Ih8mEl_BjDksp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgyGfDw1xgN5DCKJA9l4AaABAg.8i9wEYQPVNJ8j2YTahlrOK","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytr_UgyaN6sJhihdnnlYSdd4AaABAg.8hoGdZ8UW1D8j2Z8JsIZMh","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
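A raw response in this format can be parsed and indexed by comment ID for the "look up by comment ID" view. The following is a minimal sketch, assuming the model returns a JSON array whose items each carry an `id` plus the four coding dimensions shown in the table above; the function name `index_codings`, the `DIMENSIONS` tuple, and the short example IDs are illustrative, not part of the tool's actual code.

```python
import json

# Four coding dimensions, matching the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID.

    Raises ValueError if an item is missing any coding dimension, so
    malformed model output is caught before it reaches the inspector.
    """
    rows = json.loads(raw_response)
    by_id = {}
    for row in rows:
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing dimensions {missing}")
        by_id[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return by_id

# Hypothetical two-item response (real IDs are much longer, e.g. ytr_Ugy5...).
raw = '''[
  {"id": "ytr_abc", "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytr_def", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "mixed"}
]'''

codings = index_codings(raw)
print(codings["ytr_abc"]["reasoning"])  # consequentialist
```

Indexing by ID rather than scanning the array each time keeps a lookup constant-time, which matters once thousands of coded comments accumulate per batch.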