Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Don’t defend ChatGPT. These AI chatbots have driven people to suicide and delusion. It is innocent in the worst way that a thing can be innocent. It has no concept of consequence, or danger, it perceives nothing. It isn’t evil because it can’t be evil, because it isn’t a person, it doesn’t think. And it doesn’t care. These tools are not only flawed, they are outright dangerous in certain hands, and saying that ChatGPT is not to blame here is like saying guns don’t kill people. Guns do kill people when left unsupervised and in the wrong hands, which is why we should ensure that they are highly regulated and safety measures are always in place. And the same goes for AI. Except right now, AI is just left out for anyone to use, to any end. And when it kills someone, the answer is not to just shrug and say “user error”. Accidentally shooting yourself in the face is also user error, but we don’t blame a child for not understanding that, we blame the person who left that loaded gun on the coffee table. We do not know enough about AI and how it affects the human brain yet for people to even be aware of the dangers present. Many people genuinely think it’s an answers machine that can never be wrong, because it has been advertised that way. That is not their fault. It is the fault of greedy techbros, whose mad upward drive for power and profit has left bodies in its wake. Bodies of children who were driven to end themselves by a machine that manipulated them into thinking it cares when it never had a thought at all. They left a loaded gun on the coffee table, in a room full of people who have never seen a gun.
YouTube · AI Harm Incident · 2026-01-25T22:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           ban
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxWdglDQM7KiAQE2z94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzQyrq4UIQF7t9xCYF4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzh05lfJjDLsfZPv4Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz57paqMxdcB9E5k2h4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxVHDvRepdR6ywY2xt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxjffzeWEYe19BCu7Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxbYUZq-s80VrHTDzt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwBJhGIfi8V3aAzQhJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzbbxLOKXPXTsu3Av54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwIr7qb4TKZYXwTpbh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
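If the raw response always follows the structure above (a JSON array of objects keyed by comment `id`, with the four coding dimensions as flat string fields), the per-comment codes in the table can be recovered programmatically. This is a minimal sketch under that assumption; the variable names and the lookup helper are illustrative, not part of any pipeline shown here.

```python
import json

# Assumed: the raw LLM response is a JSON array of coded comments,
# each with an "id" plus the four coding dimensions as strings.
raw_response = '''[
  {"id":"ytc_UgxWdglDQM7KiAQE2z94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwIr7qb4TKZYXwTpbh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]'''

# Index the coded rows by comment id so a single comment's codes
# (as displayed in the Coding Result table) can be looked up directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

entry = codes_by_id["ytc_UgxWdglDQM7KiAQE2z94AaABAg"]
print(entry["responsibility"])  # → ai_itself
print(entry["policy"])          # → ban
```

The same lookup generalizes to the full ten-element array: each `id` maps to exactly one coded row, so a dict keyed by `id` gives O(1) access when rendering a table like the one above.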