Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Without knowing the proprietary program structure of the "AI" described by Blake Lemoine, I cannot speak with certainty about it. However, having worked as an "AI" programmer in other professional domains, Lemoine seems to be describing an illusory phenomenon exhibited by highly sophisticated programming, but is still just the result of powerful computers executing instructions.

In the early 1990's, myself and my colleagues developed programs that appeared to be actual human beings logged into a text-based virtual world, along with other, real human beings. In fact, we called them "Turing Bots", because they could fool real humans logged in to a virtual online environment into believing the bots were actually other humans. These bots worked by reading and responding to user text inputs (e.g., a person typing conversational inputs, such as having a basic conversation), and accessed stores of possible appropriate responses, putting the responses together, using proper grammatical rules. User inputs triggered "moods" programmed into the bot, which were basically programmed sets of rules and responses consistent with a particular mood or predisposed response of any real human. This may sound very complex; but in reality, they were very simple to build, although the degree of realness of the bot was directly proportional to the skill (and effort!) the programmer put into the bot.

Apart from the many huge advances of computer hardware and processing speed since then, not much else has really changed from the fundamental structure of "AI" today. Bottom line: Do I believe Blake Lemoine's opinion that the AI he tested is truly sentient? No, not at all. Any appearance of sentience at this stage in AI development is simply an illusion, albeit, an impressively realistic one.
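The mood-triggered mechanism the commenter describes (trigger words switch the bot into a "mood", i.e. a rule set, and replies are assembled from canned responses) can be sketched roughly as below. All mood names, triggers, and responses here are hypothetical illustrations, not the commenter's original 1990s code:

```python
import random

# Hypothetical mood table: each mood is a set of trigger words plus a
# pool of canned responses consistent with that mood.
MOODS = {
    "friendly": {
        "triggers": ["hello", "hi", "thanks"],
        "responses": ["Nice to meet you!", "Always happy to chat."],
    },
    "defensive": {
        "triggers": ["wrong", "stupid", "liar"],
        "responses": ["That's not fair.", "I disagree with that."],
    },
}
DEFAULT_RESPONSES = ["Tell me more.", "Interesting, go on."]

def reply(user_input: str, state: dict) -> str:
    """Update the bot's mood from trigger words, then pick a response."""
    text = user_input.lower()
    for mood, rules in MOODS.items():
        if any(trigger in text for trigger in rules["triggers"]):
            state["mood"] = mood  # user input flips the bot's mood
            break
    # Respond from the current mood's pool, or fall back to neutral filler.
    pool = MOODS.get(state.get("mood"), {}).get("responses", DEFAULT_RESPONSES)
    return random.choice(pool)

state = {}
print(reply("Hello there!", state))  # a "friendly" response
```

As the commenter notes, nothing here is intelligent: the apparent personality is just a lookup into hand-written rule sets, and its realism scales with how much effort goes into the tables.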
youtube · AI Moral Status · 2023-08-11T05:3… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxLxqk-q9d_Jjo6mbV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzKLPQfhKKqS0yPBsZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzZBi1KbB5_iOPo5p14AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwTk8YSuXUs7azXlz14AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwygWGniCmyFH9XcUF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
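A raw batch response like the one above has to be parsed and checked before the codes are trusted. The sketch below is not the project's actual pipeline code; the allowed value sets are inferred only from values visible in this document and are likely incomplete:

```python
import json

# Allowed values per coding dimension, inferred from this document only
# (a real codebook would define these authoritatively).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"fear", "indifference", "mixed", "unclear"},
}

def validate(raw: str) -> list:
    """Parse a raw LLM coding response and return a list of problems found."""
    problems = []
    records = json.loads(raw)  # raises ValueError if the model emitted bad JSON
    for rec in records:
        rid = rec.get("id", "<missing id>")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value is None:
                problems.append(f"{rid}: missing dimension '{dim}'")
            elif value not in allowed:
                problems.append(f"{rid}: unexpected {dim}='{value}'")
    return problems
```

Running this over the array above would return an empty list, since every record carries all four dimensions with values from the inferred sets.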