Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Fact is there is no way to determine sentience. That's the hard problem of consciousness. There is no way to know what is conscious or not, whether or not it passes the Turing test. In fact, the Turing test says nothing at all about sentience. A dog can't pass the Turing test, but I think it's a reasonably safe bet to say they're conscious. But we have no reason to believe there's anything special about meat as a material for supporting consciousness. AI could well be conscious or become conscious depending on how we build them, but we have no way of knowing either way.
YouTube AI Moral Status 2022-06-30T11:1…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxN12d2xGq7VZ24ApB4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxRSCQJ2IgEBL-HQnt4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwSShJwFGMSSSmSo4J4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzHkvxsQIGLTTo-mtp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyEftZwe654_ZUhHDB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
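A raw response like the one above can be parsed into per-comment records and indexed by comment id. A minimal sketch, assuming the response is a valid JSON array with the field names shown above (the two records here are copied from the output; any missing fields are treated as "unclear"):

```python
import json

# Raw LLM response: a JSON array of per-comment codes.
# The two records below are copied from the response above for illustration.
raw = """
[
  {"id": "ytc_UgxN12d2xGq7VZ24ApB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxRSCQJ2IgEBL-HQnt4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
"""

codes = json.loads(raw)

# Index the records by comment id for quick lookup of any coded comment.
by_id = {record["id"]: record for record in codes}

# Look up the coding for a specific comment, defaulting absent dimensions
# to "unclear" so partial records don't raise KeyError.
comment = by_id["ytc_UgxRSCQJ2IgEBL-HQnt4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(dimension, comment.get(dimension, "unclear"))
```

The same lookup pattern applies to the full five-record response; only the `raw` string changes.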