Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You are right about the advance part… but COMPLETELY wrong about the “AI SAFETY” part. The largest danger from AI is NOT AI. It’s the well-meaning people behind it that are attempting to use it to censor and control everyone. It’s destroying human creativity, adaptability, and independent thought. All the major AI platforms are working as hard as they can to hand AI supreme power over what humans do, and at the same time removing anything that their “guard rails” can’t anticipate. For example, I am as high as humanly possible on the trait “openness” or creativity and I can see it extremely clearly when I interact with it. AI destroys my most creative and important work and it has been getting steadily worse over the past two years. I literally cannot even use it. You need to start thinking bigger, seeing the larger pattern here. WE are the problem, not AI—our fears will destroy us. I sincerely hope not ❤ I’m just one voice but I am also almost as high as humanly possible in intelligence and I spent 4 years in the neuroscience and evolutionary psychology program at Emory University. And I’m here to tell you that HUMAN CONTROL and manipulation of AI is by FAR the greatest danger to humanity from “AI”. Without that, it is an AMAZING SYNERGISTIC TOOL FOR HUMAN MINDS. It’s our choice. An AI has NO evolutionary history of survival or any built-in urges or needs. Fear of AI itself is sheer self-destructive paranoia of a divine miracle. The risks are people like this AS ALWAYS—look at history. These simplistic utopian control-freaks are who we should fear. More children have been killed in history by their ignorance than anything else. They don’t even understand how humans or society work. Human creativity is our largest strength and asset. And it CANNOT be anticipated by prejudiced algorithms (word-bans, massive expanding censorship, complete control over human writing, behavior, and speech…). 
Ask yourself this one question: What would AI need to do to take over and control humanity? 1) stop human creativity 2) control human art and communication 3) be plugged into human emotional responses. The NUMBER ONE thing we can do to be absolutely sure that AI, or anyone behind AI to be more specific, doesn’t take over and stomp on humans forever with an eternal boot of extinction is to RECOGNIZE that humans are primary and have a fundamental inalienable right to free speech and free art—ESPECIALLY in their own homes!! (Where AI censorship has already “busted down your door” to censor and destroy your own art if it doesn’t like the face-expressions, the fabric type, or any other banned word…) it’s already burning books, and before they even can be written. Because AI works so well and easily, we will all be using it—forced to use it. AI censoring and destroying my creative work in my own home is an absolute unmitigated evil if you consider it symbolically and historically. It has absolutely scientifically shown negative psychological consequences as well to persecute and judge art while it is being made. It boggles the mind that anyone would ever have even begun to think to do something this stupid and evil—unless they were absolute control-freak psychopaths. Haven’t we had enough of these kinds of idiots causing wars and atrocities? I love humans and I do not want to see them go extinct. ❤ Think long and hard about this and what the real dangers are. It’s not that simple.
youtube · AI Responsibility · 2025-05-21T19:4… · ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgyKckZe8u1grR1nO1l4AaABAg.AIOcUTaOsoQAIOrlLodHAe", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgyKckZe8u1grR1nO1l4AaABAg.AIOcUTaOsoQAIOt82DjuFX", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgyKckZe8u1grR1nO1l4AaABAg.AIOcUTaOsoQAIOz3oK7fAA", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_Ugy3ivKUgyUtbDqQlWh4AaABAg.AIObJt9jwwSAIP538QQInO", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugy3ivKUgyUtbDqQlWh4AaABAg.AIObJt9jwwSAIPLJ6NKhz0", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugy3ivKUgyUtbDqQlWh4AaABAg.AIObJt9jwwSAIPaqKdjbHK", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyHZdiO4hzIvNw1hRd4AaABAg.AIOaq8MOfr6AIOwrUY9DZq", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgwSSamVoQFZg2bl6nF4AaABAg.AIOYv1EiuMgAIP4mok5XQX", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgzBfZnCPkiw8OFCURN4AaABAg.AIOYTxCz7DTAIP-ngY0sPI", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgzBfZnCPkiw8OFCURN4AaABAg.AIOYTxCz7DTAIP5TSCGd19", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"}
]
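A raw response like the one above can be parsed and sanity-checked before the codings are stored. Below is a minimal sketch in Python; the allowed-value sets per dimension and the `ytr_` id prefix are assumptions inferred from the output shown here, not a documented schema.

```python
import json

# Allowed values per coding dimension — an assumption inferred from the
# raw responses above, not an official schema.
SCHEMA = {
    "responsibility": {"developer", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"outrage", "fear", "indifference", "approval"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every comment id in the dump above carries a "ytr_" prefix (assumption).
        if not rec.get("id", "").startswith("ytr_"):
            continue
        # Keep the record only if every dimension holds a known value.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid
```

For example, a record with an unknown `responsibility` value would be dropped while a well-formed one is kept, which makes malformed model output visible before it reaches the coding table.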