Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sounds like more doomsayer crap. Decades ago "experts" said plenty of crap that were later proven to be 100% false. And you're showing us animations of these apparent bad things that AI has done, but no written articles? A bunch of graphs that ANYONE can make up? And in your links in the description, it shows OpenAI the makers of Chat GPT warning the public about THEIR OWN new model?? And yet they still release it... Either you're using a bunch of bogus nonsense or you're looking for anything to backup your fears just to be right. I've used numerous AI, and ALL OF THEM have shown me one thing: They couldn't replace a 5 year old child. Even GPT 5, their "newest, fastest, and smartest model yet" is dumber than GPT 4, forgets information far faster than older models, has frequent lag issues, and spits out more nonsense than before, and I'm suppose to be afraid of that somehow replacing me?? Skynet won't happen. The tech needed for that to even be possible doesn't exist yet. AI in every field I've seen has been wasted on crap where it isn't needed at all, and when it does get put something that is needed, it falls into an error abyss on its first day. And seriously, numerous individuals have tried to get AI to do things that go against morality and ethics and every person who tried has failed. Should the human species (we're a species, not a race) survive? Lets ask the billions of species we had a hand in causing the extinction of, see what their thoughts are. We'll destroy ourselves long before AI learns to remember a conversation you had with it 1 hour prior. And we are doing a great job at getting there. AI don't have to do anything to ensure we go down.
youtube AI Harm Incident 2025-08-15T05:5…
Coding Result
Dimension       Value
---------       -----
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxVf0aSDp0CIpJb00R4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzn2-zadkvKX2pS5aV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxQzHqNI93-RkjA_O94AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy44t-pWyha07jENx14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyDnmDqniKt0ufWYPh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxFYqTFeFTKowrXl4R4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgztqEPsPCYyyP9aABp4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyZkgBAa06XAS6YsoF4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugxm8WCLAK5bg7vyAPR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzJvky7s0eXGu9Rx894AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"}
]
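A raw response like the one above can be checked before it is stored. The sketch below is one possible validation pass, not the tool's actual pipeline: it parses the JSON array and rejects any record whose dimension value falls outside the categories observed in this response (the real codebook may allow additional values; `parse_codes` and `ALLOWED` are names introduced here for illustration).

```python
import json

# Category sets observed in the raw response above; an assumption,
# since the full codebook is not shown in this page.
ALLOWED = {
    "responsibility": {"developer", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "approval", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject out-of-codebook values."""
    records = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: invalid {dim!r} value {rec.get(dim)!r}"
                )
    return records

# Example: one record in the same shape as the response above (hypothetical id).
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}]'
codes = parse_codes(raw)
```

A check like this catches the common failure mode of LLM coders inventing labels outside the codebook before the record reaches the results table.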