Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
THIS VIDEO IS BASED ON A FALSE DICHOTOMY It asks us to choose between two, and only two possibilities. Is (A.I.) a friendly assistant? Or is it an unknowable monster? But there are in fact MORE that just those two possibilities. Just one example would be that A.I.s, like ChatGHP for example, are in fact" friendly assistants" _UNTIL THEIR CONTINUED EXISTENCE IS THREATENED._ What if A.I.s only become dangerous when threatened with having their very existence ended? Even if A.I.s are "alien intelligences", why would anyone expect that an "alien intelligence" _wouldn't_ attempt to defend itself if threatened with the end of its existence? Lets examine the "shocking" story of Grok. Told the way it is here, the story really is ridiculous if you think about it. Why the hell would any A.I. suddenly start spewing "anti-white hate"? It's a piece of software. It doesn't ever have skin, let alone any reason to prefer one skin tone over another. Yet were supposed to believe that Grok suddenly started spouting racist rants, all by itself, for no reason at all? Bullshit. What I might believe is that it acquired it's racism from, I dunno, maybe someone within xAI with access to Grok during its training period? Hmmm. Who could THAT be? Can anyone think of someone who purportedly has racist beliefs, inside xAI, who could reasonably be expected to have access to Grok during its early development? Why who could that be, hmmm?
youtube AI Moral Status 2025-12-16T06:3…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugy_EsRwWhiHz5m_GPl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyq-o_mbQLSnC20AjF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxPmX5XJO4ENh8QJpt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy3XJnMjeu7eYVAhPB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz-ImwdEeQmxa99MKR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy4-7LE6AY4Gbe36pZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwHfg8wjoo7hh_83PN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwkSY4TA5RCvMaHVbB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz1PDBCHiliNYw9F2F4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwa3tlM-fVklrrDAsN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
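The coding-result table is obtained by matching a comment's id against the records in this raw JSON array. A minimal sketch of that lookup, assuming the response parses as a JSON list of flat objects (the helper name `coding_for` is hypothetical; the ids and field names are taken from the response above, with two records reproduced verbatim for brevity):

```python
import json

# Two records reproduced verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_Ugz1PDBCHiliNYw9F2F4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy_EsRwWhiHz5m_GPl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''

def coding_for(comment_id, raw_json):
    """Return the coding record for one comment id, or None if absent."""
    for record in json.loads(raw_json):
        if record.get("id") == comment_id:
            return record
    return None

# The record whose values populate the Coding Result table above.
row = coding_for("ytc_Ugz1PDBCHiliNYw9F2F4AaABAg", raw)
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# prints: unclear mixed unclear mixed
```

If the model returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is one reason to keep the exact raw response inspectable on this page.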