Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
(this is a *yes, and* comment)
ah yes, we all agreed no one should use chemical…
ytc_UgzByFiti…
Apparently we don't have to worry about AI killing us all. Old men in power star…
ytc_Ugz_wTw1m…
I don’t get mad when lawyers or doctors have to look things up or double-check. …
ytc_UgyifjnlT…
Police officers are already super stupid, and now you are giving them AI? Nah, t…
ytc_Ugy8ZoAU7…
I would say that the marginal productivity of AI goes up with some humans in the…
ytr_UgxgfLdx4…
Sometimes the AI voice of a famous person in a video is close to their specific …
ytc_Ugwqelndf…
Digital Spaces degrade over Time, AI is bound to its physical structure, as well…
ytc_Ugw1dUx-N…
AI is in its infancy. When it grows up it will think just like other organisms…
ytc_UgwbL6fnE…
Comment
The concerns Blake is coming with are valid I think but most probably unanswerable at the moment. I am sure Blake knows that a valid Turing test does not imply personhood. This test is not a scientific test, it is a philosophical argument that has its own counter arguments (such as the Chinese Room). So there’s one flaw there.
A second problem in this conversation is that having feelings is seen by some philosophers as separated from being sentient, and definitely separated from displaying intelligence. Those are simply different concepts. So the bigger question is why do we not want to open the conversation about all beings having feelings and we as humans hurting them, and that might include AI but it definitely includes animals as well.
The conversation is not opened because it has a big cost us humans don’t want to pay. You know what I am referring to. So it’s about cost - and most importantly because we cannot figure out an answer at the moment.
Source: youtube · AI Moral Status · 2022-06-29T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx9Yp3pB4E890VYRjh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyQ2_b7Hro6oTNLset4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzzac_iUVmAjJJU6rp4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwvCH490NHHPG1a6Kl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwngVe8QsP0nfRBxlB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
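A raw response like the one above has to be parsed before it can populate the coding table. A minimal sketch of that step follows. Note that the `OBSERVED` value sets are inferred only from the values visible in this dump; the actual codebook may allow more labels, and the fallback-to-`"unclear"` policy is an assumption, not necessarily what the pipeline does.

```python
import json

# Raw LLM response, copied verbatim from the dump above.
RAW = """[
{"id":"ytc_Ugx9Yp3pB4E890VYRjh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyQ2_b7Hro6oTNLset4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzzac_iUVmAjJJU6rp4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwvCH490NHHPG1a6Kl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwngVe8QsP0nfRBxlB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]"""

# Per-dimension values observed in this dump (assumption: the real
# codebook may include additional labels).
OBSERVED = {
    "responsibility": {"none", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"indifference", "approval", "mixed", "fear"},
}


def parse_codings(raw: str) -> list[dict]:
    """Parse the model's JSON array and coerce out-of-vocabulary
    dimension values to "unclear" instead of failing."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in OBSERVED.items():
            if row.get(dim) not in allowed:
                row[dim] = "unclear"
    return rows


codings = parse_codings(RAW)
```

Coercing unknown labels rather than raising keeps a single malformed model output from aborting a batch run; a stricter pipeline might instead log the row's `id` and re-prompt the model.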