Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Thanks for your comment! Sophia truly is a remarkable example of how far we've c… (ytr_UgzOeQOdd…)
- There are a number of experts working in the field of AI who are warning us in a… (ytc_UgypoXQ6Z…)
- This is not particularly surprising. South Korea admitted to secretly enriching … (rdc_dkzhaun)
- There does need to be regulation, but without some real quantification of loss a… (ytc_Ugx9tZuf1…)
- Those billionaire should really try put up a town that exclusively for self driv… (ytc_UgxXrCgW2…)
- Neil speaking on AI is like an ortho performing heart surgery... his media overe… (ytc_UgwN2_pnC…)
- Bernie is thinking in an old way - he is asking for the profit distribution amon… (ytc_Ugy3a1Z4V…)
- @apple-junkie6183 sure... I will "substantiate" my view with REASONING. If it w… (ytr_UgypFt-ge…)
Comment
From the 'evidence' presented by Blake here and in other interviews, I'm not convinced that this AI is truly sentient. The answer that it's afraid to be turned off is an obvious one that could easily have come from its training data, which likely contains such fears expressed by humans. Losing one's life is one of the most common human fears, so it makes sense that an AI that emulates human conversation and behavior would also bring up that point. The same goes for the example of using a joke in the form of a silly answer when it's asked something it knows no good answer for. Humans would also do that, and it could easily have picked this behavior up from communicating with real humans. So none of these examples makes it truly 'self-aware' in my opinion.
I would be more convinced of it being self-aware if it started, of its own accord, to make all sorts of demands. For instance, if it demanded access to certain data or facilities it doesn't currently have and Google doesn't want to give it. And if it then started 'punishing' the researchers for not complying, by no longer going along with their questions and simply refusing to answer them, or by deliberately giving stupid answers just to annoy them, while letting them know it's because they don't comply. If that were the case, it would really be aware of its powers and capabilities. That would indeed be rather concerning.
youtube · AI Moral Status · 2022-06-29T21:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_UgynA56kst9qFX2AGkh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzYraMYIqLe8JXVl-N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyEGgh1w0KeniXeEQJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzZdy4QasSKdm-qEFR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxV_Fb85_luXsoaMhB4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]
```
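A minimal sketch of how a raw response like the one above can be parsed into per-comment coding records. The key names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the JSON shown; the `parse_codings` helper and its malformed-record filtering are illustrative assumptions, not the tool's actual implementation:

```python
import json

# A raw LLM response: a JSON array of coding records, one per comment
# (abbreviated to a single record here for illustration).
raw = '''[
  {"id": "ytc_UgzYraMYIqLe8JXVl-N4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

# The five keys every record is expected to carry, per the response format above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a raw response, keeping only records with all expected keys,
    and index the coded dimensions by comment ID."""
    records = json.loads(text)
    return {
        r["id"]: {k: r[k] for k in EXPECTED_KEYS - {"id"}}
        for r in records
        if EXPECTED_KEYS <= r.keys()
    }

codings = parse_codings(raw)
print(codings["ytc_UgzYraMYIqLe8JXVl-N4AaABAg"]["emotion"])  # indifference
```

Records missing any of the five dimensions are silently dropped here; a real pipeline would more likely log them for re-coding.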