Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The Nobel Prize winner is wrong... AI is not conscious. Some AI can simulate consciousness with uncanny realism, perhaps. But no AI currently in existence UNDERSTANDS anything. They are only sophisticated difference engines that evaluate patterns and regurgitate their training material. Ask them. Even they will tell you that they don't understand a single word of any language. Under the covers, they don't even analyze words; they analyze tokens - word components (or, in the case of images, pixel values, color maps or similar image representations). They use vast datasets to train, tease out patterns through repetitious learning, and then regurgitate those patterns based on requested parameters. At no point in any of this process is the program or the hardware cognizant of what the tokens or the words or the patterns mean... what they actually represent. AI is an incredibly complex and capable, but epically ignorant, mimic. AI is like a person who's been blind since birth talking about color. That person can repeat what they learn from others about color (or other attributes of sight), but they can never relate 'red' to what the word represents (Yes, studies have reported that the blind understand color in pretty much the same way sighted people do. But, I disagree. Ask a blind person to tell you the difference between red and yellow without talking about light frequency or objects that exhibit those colors, and I bet you get much different answers than the sighted would give. They can't truly relate to something they have never experienced. AI is the same.) If I feed an AI bad training data, I can get it to swear that the earth is flat, e and pi are rational numbers, or anything else I want. AI is a tool, and, like any tool, can be used to create or destroy. It is not conscious, it is not sentient, it is not an alternative lifeform... yet.
youtube AI Moral Status 2025-10-17T05:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzS8KT7ZDuqI2G3Vch4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgwQCKayFyY3WhawYE54AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugx8GZI8IqhoxuJGEZ14AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgyqHPc-7vavPFakvfN4AaABAg", "responsibility": "government","reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugx9ummNYck2XF5kE954AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugy58_oLfM9_7T5SNal4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgydJrXosrGgJLSlp1x4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwMoYreWF0T7zeiEJx4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgyBQK_de-bGqF20fOZ4AaABAg", "responsibility": "company",   "reasoning": "virtue",           "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyU7TnR8mTM4Vs-VAp4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "ban",       "emotion": "fear"}
]
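The Coding Result shown above corresponds to one record in this raw array, matched by comment id. A minimal sketch of that lookup step, assuming the raw response parses as a JSON array of per-comment records; the `extract_coding` helper and the embedded `raw` string are illustrative, not part of the actual pipeline:

```python
import json

# A trimmed copy of the raw LLM response (one record from the array above).
raw = """[
  {"id": "ytc_UgwQCKayFyY3WhawYE54AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def extract_coding(raw_response: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the coding for one comment."""
    records = json.loads(raw_response)
    by_id = {record["id"]: record for record in records}
    # Keep only the expected dimensions, dropping the id and any extras.
    return {dim: by_id[comment_id][dim] for dim in DIMENSIONS}

coding = extract_coding(raw, "ytc_UgwQCKayFyY3WhawYE54AaABAg")
print(coding)
```

In practice a pipeline like this would also need to handle malformed JSON and missing ids, since the model's output is not guaranteed to be well-formed.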