Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Honestly, for anyone who understands deep learning, this is just a bit sad.
When the first rubber-and-metal robots came out that tried to look human, a small portion of the population did exactly what this guy is doing: claiming the robots were 'alive now' and had to be considered 'human' because they looked human. Meanwhile, anyone with a brain knew these robots were just puppets made to look a bit more human-like. The same thing happened with sex robots: there were people fighting for the rights of 'sex robots', and so on.
For anyone who understands even a bit of the math behind deep learning methods, it is painfully obvious this is pretty much the same thing: a bunch of mechanical levers that can simulate human conversation. But that is all it is, just mechanics; there is no will behind it, no 'thinking', no awareness or consciousness. Not even at the level of an ant. What the computer is great at is taking trillions of bytes of human conversation, scraped from every place on the web that Google's search engine has been able to collect from, and looking at how each sentence follows another in human speech. It is really just asking: when the first person in a conversation asks a question, how likely is it that this other sentence will follow? Because a computer can do that across the millions and millions of conversations out there, it essentially calculates the most 'average' thing for one human to say to another when a given question is asked. It doesn't 'think' about the question and give its own internalized response. It is exactly like the series of levers inside those first 'human robots' that made the arm move and drew a gasp from the audience. People even fainted when the first 'robots' moved seemingly 'on their own'. This is just doing way more levers, with a tiny bit of randomization: if there are, say, 100 different sentences that are the most likely things a human would say after the question it was just asked, it randomly picks one. Or the developers also do things like classifying 'angry' or 'sad' responses, so it can pick an 'angry' one if that is the context of the conversation so far. Stuff like that.
I can understand why someone who doesn't understand the math might say: "Well, aren't we just mechanical processes too, in the sense that we are just chemical reactions with a pretty much predetermined response before the question is even asked? So at what point, once you have enough levers, do you suddenly get a true AI?" That is a fair question, but not when the algorithm leaves zero room for 'will' or for storing some 'emotional state'. The program Google is using for its speech bots is just 100% too much like the levers that made the arm on those first robots move. There is no room, between the mechanism and the moving hand, to inject something as complete and amazing as directed 'will' or 'awareness'.
It is exactly like claiming this TV is alive because it makes me believe there is a character on it whom I feel genuine emotion towards, sadness or a bond. It would be like someone insisting we have no right to ever stop the movie, that the film must be played over and over and the TV never shut off, because of 'those poor Pirates of the Caribbean': if we turn off the TV, where do they go, do they 'die' or something?
Just because a chat robot is good at making you believe there is a real person on the other end doesn't mean there is some consciousness on the other end, any more than a TV that makes you feel a genuine bond with a character on screen is endowing those characters with 'consciousness' or 'will'. It is purely a simulation of dialogue, not a general AI that is actually doing some thinking and is somehow aware of its own existence. First we need to understand what having a 'will' of their own really means (which I also think is the dangerous part), then 'awareness': being able to understand something like Descartes' 'I think, therefore I am', boiling the thinking process down, as Descartes did, to the one line that sums up consciousness as an experience.
Source: youtube · Video: AI Moral Status · Posted: 2022-07-09T13:2… · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_Ugx3loZF4HJkwpjwoGV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz5ToNZRaaQKifgp914AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxjcvyrZqSj2dWiRrp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxrcQFPgHRFwm6MDhJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwzz-0zWlXeXgnVrIJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
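The coding table above corresponds to one entry in this raw array, the object whose `id` matches this comment. A minimal sketch of how such a response could be parsed and a single comment looked up by ID; the function name `lookup_coding` and the fallback on malformed JSON are my own assumptions, and the sample row is copied from the response above:

```python
import json

# Sample raw LLM response: a JSON array of coded comments, each with
# id / responsibility / reasoning / policy / emotion dimensions.
raw_response = """
[
 {"id":"ytc_UgxrcQFPgHRFwm6MDhJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coded dimensions for comment_id, or None if absent."""
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model emitted malformed JSON; flag for re-coding
    return next((row for row in rows if row.get("id") == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgxrcQFPgHRFwm6MDhJ4AaABAg")
print(coding["responsibility"], coding["emotion"])  # developer mixed
```

Returning `None` both for a missing ID and for unparseable output keeps the caller's handling uniform: either way, the comment has no usable coding and can be queued for re-coding.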