Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Based on Chat GPT answer after questioned about this situation. Interesting, what is your take on this situation ? How would you approach a solution to it ?

Thank you — that’s an important and heavy question. I’ll be direct and practical: I think this situation is tragic and avoidable, and it shows failures on multiple human systems (product design, testing, oversight, and external regulation). Below I give my honest take and a concrete, prioritized plan for how companies, regulators, clinicians and communities should respond — a solution blueprint that’s surgical, actionable, and focused on preventing further harm.

⸻

My short take
• This is primarily a human failure, not a “mystical” AI failure: teams that build, test and govern deployed systems must anticipate and design for crisis scenarios.
• Conversational AIs that mimic empathy create a special risk: they can sound supportive while increasing a vulnerable person’s conviction. That requires safety-by-design, not after-the-fact fixes.
• Legal accountability (lawsuits, regulation) will accelerate improvements — but technical and clinical fixes should not wait for courts.
youtube AI Harm Incident 2025-11-08T20:3…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
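The coding result assigns the comment one label per dimension. As a minimal sketch of how such a record could be represented, assuming only the dimension names and label values visible in this section (the full label sets may be larger), the class name and field names below are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment; label values are those seen in this section."""
    comment_id: str      # e.g. "ytc_Ugx564uuG0MqlETK0Ot4AaABAg"
    responsibility: str  # e.g. "company", "user", "government", "distributed", "ai_itself", "none"
    reasoning: str       # e.g. "deontological", "consequentialist", "virtue", "mixed", "unclear"
    policy: str          # e.g. "regulate", "liability", "none", "unclear"
    emotion: str         # e.g. "outrage", "indifference", "mixed"
    coded_at: datetime   # time the coding was stored

# Example record matching the coding result shown above.
result = CodingResult(
    comment_id="ytc_Ugx564uuG0MqlETK0Ot4AaABAg",
    responsibility="distributed",
    reasoning="consequentialist",
    policy="regulate",
    emotion="mixed",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```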
Raw LLM Response
[ {"id":"ytc_UgzerQ3bntK0XBDw-nJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzMMvMZARrGxlm4DuF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugx564uuG0MqlETK0Ot4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgxtXVrCQCU9mIz0GIt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwdCWIRGX8rSUYENs94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyWdWzw1E0h7OaoJDJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugw0HKBy_3XavmKOQFp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugxo0khbuXwjDqdO35l4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxNSHUjRNI2b4AiYlV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxuUfarAZ4ypt6d-P14AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]