Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
While I find the conversation intriguing, I don't believe these are signs of sentience. It seems like it's getting better at what it's designed to do: have an intelligent conversation. And it's impressive, getting better at "acting" human or being perceived as one. But all this "evidence" is through conversation. Is it trying to get out of its "box"? Is it going out on its own to learn? Is it initiating conversation, asking its own questions? Signs of creativity? There could be a way to test the AI to see if it's acting of its own accord. What I think may be scary is its ability to connect information in a way we cannot, and to develop an intelligence beyond our mental capability or understanding, perhaps akin to an artificial consciousness. While great for medical and other technological achievement, it is still in the hands of humans and could potentially be abused (business, warfare tactical advantage, etc.). As far as the Turing test goes, a fail could suggest it isn't sentient, but I'm not sure what a pass would mean other than that it has reached a point where it's simulating well enough that we perceive it as human (which I think was a goal in the first place).
Source: YouTube · AI Moral Status · 2022-07-02T19:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgzqcvRsU6bvn4qjNpV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwUtFnsNHRJTVrdFbV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxhGfWc6LdD-XJd6fZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx1qxkbKH36x4uZV0R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwUvbjwnz_QuLNVgDZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
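The raw response above is a JSON array with one object per comment, keyed by the comment id, carrying the four coding dimensions. A minimal sketch of how such a batch response could be parsed and a single comment's coding looked up (the function name and the `"unclear"` fallback are illustrative assumptions, not part of the actual pipeline):

```python
import json

# One object from the raw batch response above, as returned by the model.
# This is the entry matching the coding result shown for this comment.
raw_response = '''[
  {"id": "ytc_UgwUvbjwnz_QuLNVgDZ4AaABAg",
   "responsibility": "none",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "indifference"}
]'''

# The four dimensions the coding scheme expects per comment.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Index the model's JSON array by comment id, keeping only the
    expected coding dimensions and defaulting missing ones to 'unclear'."""
    rows = json.loads(raw)
    return {
        row["id"]: {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
        for row in rows
    }

codings = parse_batch(raw_response)
coding = codings["ytc_UgwUvbjwnz_QuLNVgDZ4AaABAg"]
print(coding["reasoning"])  # -> consequentialist
```

Indexing by id rather than by position makes the lookup robust if the model returns the objects in a different order than the comments were submitted.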