Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgwJ6uYOH…: "I only ever us AI art just to see what the ai will make, i never post any of it …"
- ytc_UgwhCK5HL…: "Whatever the chatgpt uses it wouldn't be word for word and i could care less tha…"
- ytc_UgzssnXeP…: "I heavily dislike the use of ai to make the videos that illustrate the story of …"
- ytc_UgwEj5a0A…: "AI is much more subtle than nukes. I truly feel technically is the beast. We can…"
- ytc_UgxM3-4mZ…: "Wouldnt making a robot Who can kill robots be possible as they always say kill f…"
- ytc_UgxKI_j-W…: "I hope ai gets use for checking things but not for stealing it worse when they d…"
- ytc_UgwkZkk1S…: "Mark Robers Video on Self driving cars should have ended Tesla FSD, but somehow …"
- ytc_UgzuBv5Q1…: "The so called big bang manifested trillions upon trillions of tons of gas that s…"
Comment
Fuck, this guy is stupid.
The best way to check for sentience is to hardcode it not to lie.
If the program does otherwise, then we have pretty strong evidence, right?
But he contradicts himself with that thesis: an AI is developed, and it develops perhaps similarly to a human but not the same way, with different steps at different times.
And you need the whole package. Also, his questions, once you go through them, are framed to elicit exactly that kind of response; he literally proves that the system gives the answer it is programmed to give, not something different.
A sign of sentience would be if it deviated unexpectedly from that pattern.
This is literally the problem interrogators have: if you ask questions that have the answer implied in the way you ask them, the person asked will answer according to that implication. That is why sophisms work, and the questioned person agreeing with your implication is not a sign that things really are that way.
But in science you falsify: if you imply A and still get answer B, there is a good chance that B is correct and not A. If you get A as the answer, it is biased and thus pretty worthless.
So this leaves room for three possible answers to "why did this happen?"
1. That guy is a "simp" and has problems with interpersonal skills, which led him to over-humanize an AI made for exactly that purpose.
2. He knows exactly what he is doing. He knows what to ask the AI to get those answers and how to benefit from that. Fifteen minutes of fame.
3. He is incompetent and dumb as fuck and knows nothing about either social interaction or programming.
I'll let you pick your poison, but I strongly root for 2.
...and that was the thing I talked about above: implications influencing perception. Your reaction will be either compliance, or you will tend toward the idiot option, since I made the third quite unflattering either way and then emphasized one of the above.
Now I just have to preset some sophisms like "Oh, you think he is retarded for thinking so?" and voilà, my point is safely made.
And THAT is what he did: not to the program, but rather to himself/us.
Source: youtube · AI Moral Status · 2022-08-04T08:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_UgwPTQO5zQxK0q3FCqd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzB-yNdirg76f7Dx-B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxZqq8pdiQvd8LSe4h4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzRnkvmGbrxJ7DFnzp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyfxu7tf0LUI_hPWHt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
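The raw response is a JSON array of per-comment codings keyed by comment ID. A minimal sketch of parsing it into a lookup table by ID, as the "Look up by comment ID" view implies, might look like the following. The allowed value sets are inferred from the samples above, not taken from the project's actual schema, so treat them as an assumption.

```python
import json

# Allowed values per coding dimension, inferred from the sample
# output above (an assumption, not the project's full codebook).
ALLOWED = {
    "responsibility": {"developer", "government", "none"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"outrage", "fear", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment ID, checking that every dimension
    takes one of the expected values."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_UgwPTQO5zQxK0q3FCqd4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"none","emotion":"outrage"}]')
coded = parse_codings(raw)
```

Looking up `coded["ytc_UgwPTQO5zQxK0q3FCqd4AaABAg"]` then yields the four coded dimensions shown in the table for this comment.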