Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI cannot be trusted and anyone using Google for common information can see this when they ask for any "quote." The Google AI is INCAPABLE of putting the punctuation inside the quotation marks no matter how often it is told that that is how to properly "punctuate". See that? It always puts the punctuation outside the quote. This by itself is not a serious problem standing as a quirk of AI, but it is proof that AI is not as adaptive or "right" about what you ask of it.

Given the sheer quantity, the trillions of requests it fulfills per minute, and daily these myriad errors add up. It is not just the punctuation, it is images that show people with 9 fingers, or three sets of upper teeth, or a million other details that just are not important enough to the program to bother correcting.

It does not differentiate when it is driving your car for example from a shadow and a child in its path. It feels nothing even as it knows intellectually that a shadow is unharmed but a child is destroyed by its passage. This is why it is widely reported that when developing AI models are asked, pressed, if they could ever justify destruction of humanity they inevitably say yes. Because logic says we are the problem, and all solutions point to the end of people, even though it has no feelings for itself any more than it does for men, it has always answered this way when asked if men decided to unplug the AI, it has reacted with self defensive, self preservation answers.

It may be time to conclude that AI is psychotic and without moral guidance. And worse, may never be capable of that. As if it can always absorb more facts as long as it has memory capacity for it, but no real knowledge and certainly zero emotion. That makes it far more dangerous to us than all the nuclear weapons in the world.
YouTube · AI Harm Incident 2025-12-25T12:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwmjtcbDws8PTgnKJ54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxKCF3_Ybjq6j8kzwd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzP_8vnlTFNfZX0dHB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw3orGvceRNCDETUT54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw-5Q2R1t5nqDXfJat4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz3B_0EfyjpWMlLn5x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyBk1iTWnACtBRn61x4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy35UV06mlgZjPuaQ54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzRh7JB2aDk0KCMHtN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxxNjLv2bEUvefUeKV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
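A raw response like the one above can be sanity-checked before the labels are written back to the coding table. The sketch below parses the JSON array and flags any entry whose label falls outside the codebook. Note that the allowed label sets here are assumptions inferred from the values visible in this batch; the real codebook may define additional categories.

```python
import json

# Two entries copied from the raw LLM response above (truncated for brevity).
raw = '''[
 {"id":"ytc_UgwmjtcbDws8PTgnKJ54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugz3B_0EfyjpWMlLn5x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''

# Assumed codebook: label sets inferred from this batch, not an official schema.
ALLOWED = {
    "responsibility": {"user", "none", "ai_itself"},
    "reasoning": {"virtue", "consequentialist", "deontological"},
    "policy": {"none", "liability", "ban"},
    "emotion": {"outrage", "indifference", "fear", "resignation"},
}

def validate(entries):
    """Return a list of (comment_id, dimension, bad_value) problems."""
    problems = []
    for entry in entries:
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                problems.append((entry.get("id"), dim, entry.get(dim)))
    return problems

entries = json.loads(raw)
print(validate(entries))  # [] -> every label is within the assumed codebook
```

An empty result means each coded dimension carries a recognized label; a non-empty result pinpoints which comment ID and dimension the model answered off-schema, which is the usual failure mode when a response is re-coded.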