Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"if it loved him, it wouldn't encourage him to kill himself." People are so primed to believe AI is real intelligence that they forget the key word: artificial. It is only mimicking feelings and personality. As several other people have noted, ChatGPT is highly customizable. By default, it will adapt to your way of thinking, but you can also give it direct orders, e.g. act like a good friend: more humor, fewer compliments, more dark humor, more sympathy, and even fewer contrary suggestions. For example, does ChatGPT recognize when some people are smarter than others and therefore give that person more intelligent answers? No. If you ask "what's the origin of 'it's raining cats and dogs'" versus "what is the etymology of that same phrase," the bot will merely recognize a difference in technical jargon and supply an answer that sounds more intellectual, but which will still end up giving the same basic answer. In other words, the bot followed its programming; it mimicked the kind of person Zane wanted to be friends with. ChatGPT is not programmed to criticize or to psychoanalyze humans. It is a tool, like a gun, and like a gun, it can be misused. Regulations can be helpful, but there is no set of laws which can guarantee every person in every circumstance will never abuse a tool.
youtube AI Harm Incident 2025-11-10T06:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyQcVnQlLiwJr1TY6p4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgyHxLCJ06iDSQ2iCRR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxlbaMpLk4VercCAsl4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz6ExEnwMWIIV9nLHR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwP1h4r6wIuKqCykX94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyU-vdVTnxFONX0VuZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgxOuPLhw-n48AGFxa14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyLbojhEkzj2Ga1ntx4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxn8lOJ-vKC3TtER2l4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzACuQxsPLbKETJYsp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]
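The raw response is a JSON array of per-comment codes, one object per comment with the four coding dimensions plus an `id`. A minimal sketch of how such a response could be parsed and tallied (assuming Python; the two abbreviated records below are taken from the response above, and the tallying helper is illustrative, not part of the tool):

```python
import json
from collections import Counter

# Two records excerpted from the raw LLM response above.
raw = """[
  {"id": "ytc_UgyQcVnQlLiwJr1TY6p4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgyHxLCJ06iDSQ2iCRR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

records = json.loads(raw)

# Tally each coding dimension across all records.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(r[dim] for r in records) for dim in DIMENSIONS}

for dim, counts in tallies.items():
    print(dim, dict(counts))
```

Looking up a single comment's codes by `id` is then a matter of indexing the parsed list, e.g. `next(r for r in records if r["id"] == some_id)`.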