Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples
- `ytc_UgzlCH_3P…`: "putting AI art in a competition against real artist is just like male trannies i…"
- `ytc_Ugy_YseB_…`: "These chatbots manipulated this child into taking his life!! “ Please come home …"
- `rdc_nk9pxyr`: "AI is the last big bubble of the USA. Once it pops, china will take over. The we…"
- `ytc_UgxemVuOK…`: "Face recognition is the epitome of unconstitutional and should be illegal in eve…"
- `ytr_Ugx7bjL33…`: "If only you guys would give better advice than just pick up a pencil others woul…"
- `ytc_Ugx0aK8ZJ…`: "instead of calling yourself an ai artist call the ai the artist. I'm kidding obv…"
- `ytc_Ugx7CfOxR…`: "A book published in 1995 was 'The End of Work', by Jeremy Rifkin, was prescient …"
- `rdc_oh60qjw`: "So Microsoft will require each AI agent to have a license. But it’s ok for Meta…"
Comment
Without knowing the proprietary program structure of the "AI" described by Blake Lemoine, I cannot speak with certainty about it. However, having worked as an "AI" programmer in other professional domains, Lemoine seems to be describing an illusory phenomenon exhibited by highly sophisticated programming, but is still just the result of powerful computers executing instructions.
In the early 1990's, myself and my colleagues developed programs that appeared to be actual human beings logged into a text-based virtual world, along with other, real human beings. In fact, we called them "Turing Bots", because they could fool real humans logged in to a virtual online environment into believing the bots were actually other humans. These bots worked by reading and responding to user text inputs (e.g., a person typing conversational inputs, such as having a basic conversation), and accessed stores of possible appropriate responses, putting the responses together, using proper grammatical rules. User inputs triggered "moods" programmed into the bot, which were basically programmed sets of rules and responses consistent with a particular mood or predisposed response of any real human. This may sound very complex; but in reality, they were very simple to build, although the degree of realness of the bot was directly proportional to the skill (and effort!) the programmer put into the bot. Apart from the many huge advances of computer hardware and processing speed since then, not much else has really changed from the fundamental structure of "AI" today.
Bottom line: Do I believe Blake Lemoine's opinion that the AI he tested is truly sentient? No, not at all. Any appearance of sentience at this stage in AI development is simply an illusion, albeit, an impressively realistic one.
Source: youtube · Category: AI Moral Status · Posted: 2023-08-11T05:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
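A coding result like the table above can be sanity-checked against the dimension values that appear on this page. A minimal sketch: the value sets below are inferred only from the examples shown here and are likely incomplete (the real codebook may define more categories), and `validate_coding` is a hypothetical helper, not part of the tool.

```python
# Validate one coding row against the dimension values observed on this page.
# NOTE: these value sets are inferred from the examples shown and are likely
# incomplete; the actual codebook may allow more categories.
OBSERVED_VALUES = {
    "responsibility": {"developer", "company", "government", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"indifference", "fear", "mixed"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means the row passes."""
    problems = []
    for dimension, allowed in OBSERVED_VALUES.items():
        value = coding.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            problems.append(f"unseen value for {dimension}: {value!r}")
    return problems

# The coding shown in the table above passes the check.
row = {"responsibility": "developer", "reasoning": "deontological",
       "policy": "unclear", "emotion": "indifference"}
print(validate_coding(row))  # []
```

Such a check is useful when the LLM occasionally emits a value outside the codebook, which would otherwise pass silently into the results.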
Raw LLM Response
[
{"id":"ytc_UgxLxqk-q9d_Jjo6mbV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzKLPQfhKKqS0yPBsZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzZBi1KbB5_iOPo5p14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwTk8YSuXUs7azXlz14AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwygWGniCmyFH9XcUF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]