Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This particular AI system is nothing out of the ordinary, except that its makers present 'her' as if it were. It runs on the same kinds of algorithms we have used for natural language processing for years, which in the end is just signal processing. We have been able to process natural language with impressive results for years, but these systems are not 'smart' in the usual sense of being able to understand and interpret a concept that is new to them. They have to be explicitly trained on new information. By, you guessed it, a human. As a result, they are only as smart as the data they have seen before. They can't learn a new concept on their own. So no, they can't learn that they should destroy humans without anyone giving them that information. The response given here is most likely her catering to a pattern that calls for an affirmative answer, which comically makes it sound like she's after world domination.

Yann LeCun (director of Facebook's AI wing and one of the industry's most prominent researchers) has recently gone out of his way to call out this particular system for these exact reasons. They are making Sophia seem like she understands all the information passing through her, when in the end the system is merely interpreting and responding to questions. No, Sophia does not understand the importance of the UN to world peace. No, Sophia does not have any conscious thoughts about the future of humanity. It's all pre-trained information passed through its algorithms. In fact, many of the answers seem too smooth for an NLP algorithm to produce, which strongly suggests that many of the regularly asked questions have pre-programmed responses.

Sophia is really nothing more than a very tightly packaged collection of sensors, algorithms, and robotic components. 'It' does not experience consciousness; it merely runs the information it receives through its algorithms and picks the response most likely to be 'correct'. That is, she is not making her own decisions, only reproducing what we roughly trained her to 'know'. It's quite annoying that it's being marketed as a truly intelligent AI, because anyone even vaguely familiar with pattern matching can tell you that this is not the AI people take it to be. Much like IBM Watson, it's not a magical machine just because it has a sleek outfit and a marketing campaign.
Source: youtube · AI Moral Status · 2018-01-23T21:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
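A coded result like the one above can be sanity-checked before it is stored. A minimal validation sketch in Python, assuming each record is a JSON object with an id plus these four dimensions; the allowed value sets are inferred only from the raw response shown below, not from a confirmed codebook:

    # Validate one coded record. ALLOWED holds the dimension values
    # observed in the raw LLM response on this page (an assumption,
    # not an official schema).
    ALLOWED = {
        "responsibility": {"none", "ai_itself", "developer", "user", "distributed"},
        "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
        "policy": {"none", "ban", "liability", "regulate"},
        "emotion": {"fear", "indifference", "mixed", "approval", "outrage"},
    }

    def validate(record: dict) -> list[str]:
        """Return a list of problems found in a single coded record."""
        problems = []
        if "id" not in record:
            problems.append("missing comment id")
        for dim, allowed in ALLOWED.items():
            value = record.get(dim)
            if value not in allowed:
                problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
        return problems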
Raw LLM Response
[ {"id":"ytc_Ugw6v1-S6hLOucpq1vN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwvqhnlVX_x_E_rdpR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzhVVPj8Iz-FTFLD2J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzwX0vkUiCKs71e1n54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzZAyjTQNX46GoQ22F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugz8jC6JF1NcnnSAbaZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"}, {"id":"ytc_UgwQnWjmvzAFfsIBPY94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwKKaCwYqZ25i1tn5Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwojFnFcngAuSjT7zN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy5puIjp8MGB9h40mp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"} ]