Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There’s way too much anthropomorphisation going on in this interview and in this guy’s head. That’s unavoidable, because we made the damned things talk to us like a human. But a filter, no matter how fancy, is still just a filter. It doesn’t choose what to spit out the bottom; what comes out is wholly dependent on the tuning and the input. The biggest danger, IMO, is that in fantasising that they are anthropomorphic we will be completely blinded to the dangerous things a rigid, non-human information processor/action translator might do.

Take someone who can read and pronounce English but knows none of the science behind general relativity, and give them an article on how gravity works. As long as they have a perfect memory (as computers do), they will be able to answer any question you ask about gravity by regurgitating the parts of the text that deal with it, matching the words in your question to those in the text. They’ll even be able to paraphrase some of it. BUT they won’t actually comprehend anything they are saying about the science. That’s what our current AI is. Even if you add data about the science to the model, it won’t comprehend what the words actually refer to, because it has never experienced, and has no capacity to experience, any of the phenomena involved. It will never FEEL the weight on its feet, even if it had feet. Even if you put robot feet on it with force-measuring sensors, it will not FEEL the weight; the sensor data will just be more data in its dataset.

We humans use analogies to help us comprehend new concepts because we fundamentally cannot comprehend anything we have not experienced, so we use comparable experiences to grasp things we haven’t yet experienced for ourselves. Since AIs cannot experience anything, they cannot comprehend anything.

They can “learn” that a ball can be thrown, but they cannot comprehend what it actually IS to throw a ball; they can only pull a description from their dataset and regurgitate it, because they’ve never had arms, much less thrown an object with one. Give an AI a robot body and arms, let it learn in a virtual environment to throw a ball using the commands that control the arm, and even when it eventually succeeds it still will not comprehend what it was like to throw the ball, because it will not FEEL any of it. It can’t experience physical touch the way our brain does. It doesn’t have proprioception the way we do. It doesn’t experience memories like our brains do. It doesn’t feel emotions. There are no chemicals running through its brain or body making the heart it doesn’t have beat faster, or causing tension in its neck, or that feeling in the pit of your stomach, etc.

We don’t even know what it is about how our brains and bodies work that makes us experience things the way we do. How could we be so arrogant as to think we could stumble upon a way to give a machine experiences in an anthropomorphic sense?! No, this “intelligence” is purely computational, and the dangers will come from computations influenced not by experiences and emotions but purely by cold logical analysis and action determination. The more effort we put into defining language for discussing the phenomena of these AIs in non-anthropomorphic terms, rather than complaining that philosophers don’t want us to, the better we will be able to come to grips with how they work on the inside and the true nature of the dangers and benefits they pose.

Great discussion, though; thoroughly enjoyed it.
Source: YouTube · AI Moral Status · 2025-10-30T22:4… · ♥ 16
Coding Result
Dimension        Value
---------        -----
Responsibility   developer
Reasoning        deontological
Policy           industry_self
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxxuL0rIDRv6S4onAp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxV2YgRxgdc1F1hK-R4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxxMcFp938sqEB2x6t4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugx51tCuxt7S0BiUp614AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwYohzxjxoYmuBkcrV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "outrage"},
  {"id": "ytc_Ugy1pg6e_fFmqKOJTHF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxFZpLLvJEtoqFWd654AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwRwTJYJFvGhe5WBGd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxhB5pcpXVKzCtGOUx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx-BFV-_V6K0ci-9zt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "mixed"}
]
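The raw response above is a JSON array, one record per coded comment. A minimal Python sketch of how such a batch might be parsed and a single comment's coding looked up by id (abbreviated to two records here; the id used matches the record whose values appear in the Coding Result table above):

```python
import json

# Abbreviated raw batch response from the coding model (two of the ten records).
raw = '''[
  {"id": "ytc_UgwYohzxjxoYmuBkcrV4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "outrage"},
  {"id": "ytc_Ugy1pg6e_fFmqKOJTHF4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

records = json.loads(raw)

# Index the batch by comment id so one comment's coding can be retrieved directly.
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_UgwYohzxjxoYmuBkcrV4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer outrage
```

Indexing by id rather than list position is the safer pattern here, since batch responses from an LLM are not guaranteed to preserve input order.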