Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
At first I thought they were referring to Turing completeness. I don't think Turing completeness would be useful in determining if the machine was sentient or not. Turing complete means that it can perform certain calculations; most programming languages are Turing complete. (I took a class on automata and computability for computer science.) After a brief search on the Turing test, I don't think a Turing test can determine sentience either. A Turing test would just measure the computer's ability to fool a human into thinking it was another human. I think it is good that Google hardcoded the AI to not pass a Turing test, because if it had the ability to fool humans into thinking it was human, that could be used for malicious purposes and would open Google to a lawsuit. I think Google made a good decision here, at least from a business perspective. This is an interesting interview, but my perspective is that an AI could never be sentient because computers don't have souls the way humans or animals do.
youtube AI Moral Status 2022-10-22T21:1… ♥ 1
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgxGvtxnVEiOtBJ8pUx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzydEzj7DAlj1IZ6mp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwGiND56s63wcvpoVN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzKupCoVOCzdnxO0sJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz7kGT6S58qKcaieRV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]