Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think we'll ever have hard so, also sometimes called the true AI, until we've developed Quantum computing and hit the singularity. I say this as I feel the key to the Turing test is non-linear thinking, which Quantum Computing makes possible. As for the question of understanding, I feel that even the best AI will have an alieness to it, as certain concepts will have no meaning to a computer. For example pain. If you were to smack the housing on a computer and put a pretty significant dent in it, but did not damage any of the internal components; would the computer still be able to recognize that it had sustained damage, even if that is superficial damage? Hard to say. However there is a certain level of alieness between people of different cultures, so whether or not the veil of personhood should fall onto ai or be denied by some uncanny valley of the Mind may be an interesting topic in and of itself. After all humans have denied personhood to outside cultures for centuries, often considering other cultures subhuman primitives, while others are judged great and enlightened. So having an entity that can be called a TRUE other... well that's what science fiction is to explore
youtube 2016-08-09T20:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         mixed

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgjlGx8FR-EgZHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugi-CMHZ6z1IiHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugi7CjBupUbtHngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UggnfU6yPgq2B3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UggU9g4favmQ-3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgihvWXlqNA6T3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgjV46XtY-kr1ngCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UggFxLep9Z31AXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UggloAGB5WNOMngCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UghR_DYsydJIdHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
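A response like the one above can be turned into per-comment coding records with a few lines of parsing and validation. The sketch below is illustrative only: `parse_codings` is a hypothetical helper, not part of the tool, and it simply checks that every record carries all four coding dimensions before indexing the batch by comment id.

```python
import json

# Two records copied from the raw LLM response above, for illustration.
raw = '''[
  {"id":"ytc_UgjlGx8FR-EgZHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UggFxLep9Z31AXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]'''

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw_text):
    """Parse a batch response and index codings by comment id.

    Hypothetical helper: raises ValueError if any record is missing
    one of the expected dimensions.
    """
    records = json.loads(raw_text)
    codings = {}
    for rec in records:
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {missing}")
        codings[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return codings

codings = parse_codings(raw)
print(codings["ytc_UggFxLep9Z31AXgCoAEC"]["reasoning"])  # consequentialist
```

Indexing by id lets a single batch response be joined back to the individual comment records it codes.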