Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- “I’m sure there is more money to be made on training to use AI than in actually u…” (ytc_UgyQ0HfKY…)
- “lots of AI workers.. are FREE, meaning no wages and no taxes.. a companies who o…” (ytc_Ugy1o-LjN…)
- “Bias, AI has a hard time detecting black/darker people. So they will not be f…” (ytc_Ugz8Iuyjo…)
- “AI can also write *romantic novels* about real men while depicting them behaving…” (ytc_Ugx118VTN…)
- “I wonder how this video hold today after Genoci*de in Gaza and the rise of ai co…” (ytc_UgzR9jNBF…)
- “The question is whether there will be AI for all or AI only for the privileged f…” (ytc_UgwNIhR_B…)
- “Shad disliker detected. Opinion substantiated. Fuck that woman hating dweeb. Ed…” (ytc_Ugz3Y7x8e…)
- “I've tried creating some AI art from an app. What I discovered was that not ever…” (ytc_UgzG0NrVv…)
Comment
At first I thought they were referring to Turing completeness. I don't think Turing completeness would be useful in determining whether the machine was sentient. Turing complete just means that it can perform certain calculations; most programming languages are Turing complete. (I took a class on automata and computability for computer science.)
After a brief search on the Turing test, I don't think a Turing test can determine sentience either. A Turing test would just measure the computer's ability to fool a human into thinking it was another human. I think it is good that Google hardcoded the AI not to pass a Turing test, because if it had the ability to fool humans into thinking it was human, that could be used for malicious purposes and would open Google to a lawsuit. I think Google made a good decision here, at least from a business perspective.
This is an interesting interview, but my perspective is that an AI could never be sentient because computers don't have souls the way humans or animals do.
youtube · AI Moral Status · 2022-10-22T21:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxGvtxnVEiOtBJ8pUx4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgzydEzj7DAlj1IZ6mp4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwGiND56s63wcvpoVN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzKupCoVOCzdnxO0sJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz7kGT6S58qKcaieRV4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
```