Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think they need a more rigorous standardized test than the Turing Test to evaluate sentience. The Turing Test evaluates the ability of a computer to imitate a human so well that a human interrogator cannot tell the difference between a human and a computer. In that situation, you depend on subjective human opinion, not objective methods of evaluating sentience. The problematic part is psychology and philosophy are not something we can do holistically like physics or math because we haven't even understood the brain entirely. Not that we understand physics or math fully either, but we do understand them more in comparison to psychology or philosophy, philosophy being derivative of introspection. When we do, we might stand a chance at standardizing a test, but until then, we only have our subjectivity which is easily fooled by ML algorithms in advertisements countless (except by the algo) times a day.
YouTube · AI Moral Status · 2022-07-06T14:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgyG4uZU2LOhtzDQYGR4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugztjvo6ZME5WS_Y2IJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwml2JdXPLGcDOciqt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugx4cNRNExDV0njv4Nl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzXO5WDKs9fheGaRBZ4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]
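As a minimal sketch of how a raw response like the one above can be checked, the JSON array can be parsed and indexed by comment id (assuming the response is valid JSON; the ids and field names are taken verbatim from the response shown here):

```python
import json

# Raw model output as shown above: a JSON array of per-comment codings.
raw = '''[
  {"id": "ytc_UgyG4uZU2LOhtzDQYGR4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugztjvo6ZME5WS_Y2IJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwml2JdXPLGcDOciqt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugx4cNRNExDV0njv4Nl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzXO5WDKs9fheGaRBZ4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]'''

# Index the codings by comment id for fast lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Pull out the coding for one comment id from the response.
coding = codings["ytc_Ugx4cNRNExDV0njv4Nl4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # → consequentialist indifference
```

This lookup reproduces the values shown in the Coding Result table above (reasoning "consequentialist", emotion "indifference") for that comment's id.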