Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Whether Google wanted it to become sentient or not, the answer is still: *No.*
Understanding how AI and Machine Learning works, it should be very clear why this is the case.
It does not have emotions, unique thoughts or anything in that realm. It is built from data, everything it has learned comes from data and all it outputs, is data. 0's and 1's.
It "knows" a lot about humans, language and all sorts of topics and based on all the information it has been given, it chooses to answer using that data to figure out what makes most sense, to a human.
So, humans tend to answer "Two", if asked how many eyes/legs/arms they have. That is obvious to a human, and since the AI has access to a lot of data that would confirm this, based on many factors, it would give you the same answer. If trained well enough, it would also give you the edge-cases, where people has been born with more or less, or lost one or more, during their life.
And that is impressive, how a machine can learn and answer such things.
But again, it is only because of the insane amount of data it has been given.
Much in the same way, that when I was training CNN (Convolutional Neural Networks) to differentiate between cats and dogs, from pictures, it would generally be correct when guessing on a new picture it hadn't seen before. But in a very small amount of cases, it would guess the wrong answer.
Reason being, the picture was taken in a way that the machine hadn't seen a lot (or at all) and therefore tricked it.
An example could be; A dog, but around the same size of a house cat, looking away from the camera, and the tail was missing from the picture. Therefore, the machine thought that it looked like a cat, where any person would easily be able to tell that it was a dog.
Whatever this guy's intention is, it doesn't change the fact that the AI is not sentient. Whether it is for 15 minutes of fame, him being under an illusion that it is, or something else.
*-This comment was made by Xkcz25-G, a newly created and sophisticated AI.*
Source: YouTube · Video: AI Moral Status · Posted: 2022-06-29T17:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzm_oOBtojTLa-vGXB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz0rtmNn_fJKKdvzj54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzF3eXDY5kKBpp6mQZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwleoq3BTZ15uh7lGV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzYpGUjDnm-SICKfEZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
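A response like the one above can be parsed into per-comment codings and validated against the coding scheme before use. A minimal sketch in Python — the field names come from the JSON shown here, but the sets of allowed values are assumptions inferred only from the codes visible on this page (the real scheme likely has more categories):

```python
import json

# Allowed values per dimension, inferred from the codings shown above.
# Assumption: the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "outrage", "fear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding},
    silently dropping rows with missing or out-of-scheme values."""
    codings = {}
    for row in json.loads(raw):
        values = {dim: row.get(dim) for dim in ALLOWED}
        if all(values[dim] in ok for dim, ok in ALLOWED.items()):
            codings[row["id"]] = values
    return codings

raw = (
    '[{"id":"ytc_Ugzm_oOBtojTLa-vGXB4AaABAg",'
    '"responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"}]'
)
coded = parse_codings(raw)
print(coded["ytc_Ugzm_oOBtojTLa-vGXB4AaABAg"]["emotion"])  # indifference
```

Dropping invalid rows rather than raising keeps a batch run alive when the model occasionally emits an off-scheme label; a stricter pipeline might log or re-prompt those rows instead.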