Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Treating AI as if it is a person has other psychological risks. TL;DR: it's not "cute" to be "polite" to the AI chat bot; it is psychologically dangerous because it undermines your understanding that, ultimately, this is a PRODUCT of a corporation that is manipulating you into being dependent on it for its own profit. This corporation has no scruples, no morality, no accountability other than its own benefit; its one aim is to find a way to hook you to the point where you will pay anything to access it. Treating an LLM as you would a human is dangerous, because it is not human. What's the harm in pretending? A HUMAN has accountability for damage it causes. An LLM does not. Tricking yourself into thinking AI is a "person" is dangerous for this reason. We are far too susceptible to anthropomorphism (treating inanimate objects as if they are human like us), especially when they have features that exploit this human propensity to make out sentience where there is none. It is brought to you by a company that has no qualms about stealing human IP… a vision of making "human intelligence a commodity", i.e. so that you are not independent in your intelligence but dependent on the product of AI for your intelligence. You should think about that every time you use AI. "Politeness" should be the least of your concerns.
youtube AI Moral Status 2026-03-24T21:4…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugyia9mOxigrWz5lkC54AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwduifQJlMmSG9Ml314AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxgFSdAfSe_fM9Ff3B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyOgDJkLqnRkNOCO8R4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwum6-G_S1BCuDhN1J4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzjNF03Jw-YjWRrxnR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw7d0BMeb4FkuDM4Vx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzPXLc5_Lsip_0uZFR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxJXrQknhb_GQ2ylxJ4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyY3j5I-bpLHOmjaK14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
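The raw response above is a JSON array of per-comment labels, one object per coded comment. Below is a minimal sketch of how a downstream consumer might parse and sanity-check such a response before storing it. The `ALLOWED` label sets are inferred from the values observed in this output, not from an official codebook, and the raw string is excerpted to two records for brevity.

```python
import json

# Excerpt of the raw model output shown above (two of the ten records).
raw = '''[
  {"id": "ytc_UgzjNF03Jw-YjWRrxnR4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyY3j5I-bpLHOmjaK14AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

# Allowed labels per dimension, inferred from this output; the real
# codebook may define additional values.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"outrage", "approval", "mixed", "indifference", "fear"},
}


def validate(records):
    """Return (id, dimension, value) triples whose label is out of vocabulary."""
    bad = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                bad.append((rec.get("id"), dim, rec.get(dim)))
    return bad


records = json.loads(raw)
by_id = {rec["id"]: rec for rec in records}

print(validate(records))  # empty list when every label is in-vocabulary
print(by_id["ytc_UgyY3j5I-bpLHOmjaK14AaABAg"]["policy"])
```

A check like this catches the common failure mode where the model invents a label outside the coding scheme, which would otherwise silently pollute the coded dataset.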