Raw LLM Responses

Inspect the exact model output behind any coded comment. Comments are coded in batches, so the raw response below contains records for several comments; the record whose id matches this comment produces the coding result shown.

Comment
the LaMDA does no processing, no "thinking" until you send it text. Then it uses a sophisticated algorithm to calculate a reply. By it's very nature it can't be sentient. It can't think for itself. It just calculate the "most likely" answer. If it gives a good answer it is a well trained algorithm. But it doesn't process anything if it isn't responding to an input. I don't think he understands what "sentience" is. It's not possible that LaMDA can be sentient - not because of a policy, or religion, or politics. It's a very technical reason why it can't be.
Source: YouTube · Video: "AI Moral Status" · Posted: 2022-07-03T08:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
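
Each dimension draws from a closed vocabulary. Below is a minimal validation sketch, assuming a Python pipeline; the category sets are assembled only from values visible on this page and are not necessarily the full codebook:

```python
# Category sets assembled from values visible on this page only;
# the real codebook may define additional values (assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"none", "industry_self", "unclear"},
    "emotion": {"indifference", "approval", "outrage", "mixed", "unclear"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems found in one coded record."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems
```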
Raw LLM Response
[ {"id":"ytc_UgwC4Kw3dyiX5-NT3ud4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgzwmHPwAu_263ymR614AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"}, {"id":"ytc_Ugx1wY-86-XDputSr1h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzsTFlMcttrzEllTPJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugx3-zZxEWfRRl1xsjl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]