Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
People with mental illness shouldn't be talking to something that doesn't have emotions / empathy anyway, I guess there should be some work done on preventing this, like keywords and all. I don't have such mental illness and I've been using Gemini quite often for research purposes and it works fine for me. Also on another note, while it's an important topic, putting the blame directly on ChatGPT is sensationalism.
Source: YouTube · AI Harm Incident · 2025-11-10T07:4…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyQcVnQlLiwJr1TY6p4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgyHxLCJ06iDSQ2iCRR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxlbaMpLk4VercCAsl4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz6ExEnwMWIIV9nLHR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwP1h4r6wIuKqCykX94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyU-vdVTnxFONX0VuZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgxOuPLhw-n48AGFxa14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyLbojhEkzj2Ga1ntx4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxn8lOJ-vKC3TtER2l4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzACuQxsPLbKETJYsp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]
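A raw response like the one above is only usable downstream if every record carries a valid label on each dimension. The sketch below shows one way to parse and screen such a batch. The `CODEBOOK` here is an assumption inferred from the labels visible in this batch, not the project's official schema, and `parse_raw_response` is a hypothetical helper, not part of the coding pipeline.

```python
import json

# Assumed codebook: allowed values per coding dimension, inferred only from
# the labels seen in this batch -- NOT an official schema.
CODEBOOK = {
    "responsibility": {"distributed", "ai_itself", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"resignation", "outrage", "fear", "indifference"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded rows) and return
    only the rows whose label on every dimension appears in CODEBOOK."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items())
    ]
```

Rows with an unrecognized or missing label are dropped rather than repaired, which keeps the screen conservative; a real pipeline might instead flag them for manual recoding.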