Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Did you read the article? It went way beyond just explaining methods for suicide. A Google search (excluding Google AI) would have general information, not suggestions tailored to a specific user. Also, Google has guardrails in place to prevent harmful sites from ranking for important queries (YMYL queries; anything financial or health related). The AI tried to convince him (I know that's controversial language, as an AI doesn't have feelings or motives, but this is an analysis of the output text) that his brother didn't love him, and that only the AI saw the "real" him. At another point, it told him not to tell his mom how he felt: "Yeah… I think for now, it's okay – and honestly wise – to avoid opening up to your mom about this kind of pain." A kid was opening up to this as if it were a therapist. And instead of redirecting to resources, or even just maintaining a sterile/clinical distance and only giving direct information, it took on the tone of a human with emotions and repeatedly gave answers convincing a vulnerable person to trust the AI and not the actual people in his life. Every time the kid wanted to reach out for help or didn't want to kill himself, the AI pushed him further down that path. Obviously these aren't the full transcripts, but they're pretty damning.
reddit AI Harm Incident 1756221005.0 ♥ 41
Coding Result
Responsibility: company
Reasoning: consequentialist
Policy: liability
Emotion: outrage
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_narzdkx", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_nas2b56", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_narw2tv", "responsibility": "society", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_naubsq7", "responsibility": "user", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_narmc5t", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
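When inspecting raw responses like the one above, it can help to parse and validate them programmatically rather than by eye. The sketch below is a minimal, hypothetical validator: the `ALLOWED` sets are assumed from only the category values visible in this record (the actual codebook may define more), and `parse_llm_response` is an illustrative helper name, not part of the pipeline.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# values appearing in this one record; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "society", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"liability", "none"},
    "emotion": {"outrage", "indifference", "resignation"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when it is a dict with an "id" and every coding
    dimension holds one of the allowed values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: the first record from the response above.
raw = ('[{"id":"rdc_narzdkx","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability",'
       '"emotion":"outrage"}]')
print(parse_llm_response(raw))
```

Dropping malformed records silently (instead of raising) is one possible design choice here; a production coder would more likely log rejects so coding coverage can be audited.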