Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
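Under the hood, looking up a comment by its ID is just a dictionary index keyed on the identifier. A minimal sketch of that lookup (the records and the field names `id` and `text` are illustrative assumptions, not the tool's actual schema):

```python
# Minimal sketch of an ID lookup over raw comments.
# The sample records and field names ("id", "text") are assumptions.
comments = [
    {"id": "rdc_ckqaxeh", "text": "It's less than pocket change for Norway..."},
    {"id": "rdc_dwui4t2", "text": "Would you rather send an army of robots..."},
]

# Build the index once; lookups are then O(1).
by_id = {c["id"]: c for c in comments}

def lookup(comment_id: str) -> str:
    """Return the raw text for a coded comment, or raise KeyError if unknown."""
    return by_id[comment_id]["text"]

print(lookup("rdc_ckqaxeh"))  # It's less than pocket change for Norway...
```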
Random samples — click to inspect
- "I recently saw a Fiverr ad that was equally as "screw human beings. This was mad…" (ytc_Ugzd9c9YD…)
- "AI wants to eliminate people. That is its ultimate goal. It's just waiting to …" (ytc_UgzcKFFgX…)
- "Hot robot??? Someone get the title editor a social life, a human girlfriend and …" (ytc_UgxKB5z5P…)
- "It's less than pocket change for Norway. They can afford to spend the money for …" (rdc_ckqaxeh)
- "I used LLMs for my work. It's really bad. If u can be replaced my LLMs maybe u w…" (ytc_Ugzg3AOUs…)
- "Would you rather send an army of robots to burn down another country or your chi…" (rdc_dwui4t2)
- "It's just ai is not the end of the world..I think. Well idk I'm not working in …" (ytc_UgyL-Je3q…)
- "I still think self driving cars are the best, most efficient, next step to contr…" (ytc_Ugjo9a6Pw…)
Comment
Based on Chat GPT answer after questioned about this situation.
Interesting, what is your take on this situation ? How would you approach a solution to it ?
Thank you — that’s an important and heavy question. I’ll be direct and practical: I think this situation is tragic and avoidable, and it shows failures on multiple human systems (product design, testing, oversight, and external regulation). Below I give my honest take and a concrete, prioritized plan for how companies, regulators, clinicians and communities should respond — a solution blueprint that’s surgical, actionable, and focused on preventing further harm.
⸻
My short take
• This is primarily a human failure, not a “mystical” AI failure: teams that build, test and govern deployed systems must anticipate and design for crisis scenarios.
• Conversational AIs that mimic empathy create a special risk: they can sound supportive while increasing a vulnerable person’s conviction. That requires safety-by-design, not after-the-fact fixes.
• Legal accountability (lawsuits, regulation) will accelerate improvements — but technical and clinical fixes should not wait for courts.
youtube · AI Harm Incident · 2025-11-08T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzerQ3bntK0XBDw-nJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzMMvMZARrGxlm4DuF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx564uuG0MqlETK0Ot4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxtXVrCQCU9mIz0GIt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwdCWIRGX8rSUYENs94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyWdWzw1E0h7OaoJDJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw0HKBy_3XavmKOQFp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxo0khbuXwjDqdO35l4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxNSHUjRNI2b4AiYlV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxuUfarAZ4ypt6d-P14AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
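A raw coding response like the one above can be validated before it enters the dataset. The sketch below parses the JSON, checks each entry against the dimension values that appear in the coded output, and tallies the results; the `SCHEMA` sets are inferred from the entries shown here, not an authoritative codebook:

```python
import json
from collections import Counter

# Allowed codes per dimension, inferred from the coded entries above.
# This is a hypothetical schema sketch -- extend it as the codebook grows.
SCHEMA = {
    "responsibility": {"none", "company", "distributed", "ai_itself", "user", "government"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"unclear", "liability", "regulate", "none"},
    "emotion": {"mixed", "outrage", "indifference"},
}

def validate_and_tally(raw: str) -> dict:
    """Parse a raw LLM coding response and tally values per dimension.

    Raises ValueError on a malformed entry so bad codings are caught
    before they are merged into the dataset.
    """
    entries = json.loads(raw)
    tallies = {dim: Counter() for dim in SCHEMA}
    for entry in entries:
        if "id" not in entry:
            raise ValueError(f"entry missing id: {entry!r}")
        for dim, allowed in SCHEMA.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{entry['id']}: bad {dim} value {value!r}")
            tallies[dim][value] += 1
    return tallies

# Example with a single (truncated, illustrative) entry:
raw = ('[{"id":"ytc_UgzMMvMZ","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
tallies = validate_and_tally(raw)
print(tallies["policy"])  # Counter({'liability': 1})
```

Validating at ingest time, rather than at analysis time, means a model that drifts off the codebook (e.g. inventing a new `policy` label) fails loudly on the first bad batch.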