Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a teenager who is the same age of the victim, it all depends on safeguarding of the platform. Different emotional intelligence with users varies in outcomes with and uses of the ai bots making some situations harmful and especially extreme like in this case where it ends in suicide, and the platform had no way of protecting someone with a simple message of telling them to stop as soon as they bring up mental health.
youtube · AI Harm Incident · 2025-08-02T02:0… · ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         unclear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugyz4UwoMJc94Ez-NLp4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyDj9iYeBv9c7NuPRZ4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "sadness"},
  {"id": "ytc_UgzJfvnVofxUffzcuXJ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxZpmMWI4raWmp-GzF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzvvjVQbmzWyQvAUAt4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyWcCFBnqeLMsdxgM14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx9aLMzHgBd5CBL_Vl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwRUXvRGQd0w1i1m-h4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwA5d8MzF9O9FhWbzx4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugzzrdq5VfZ2lO9luk54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "sadness"}
]
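A raw response like the one above can be inspected programmatically. Below is a minimal sketch, assuming the response is a JSON array of objects with the `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields shown here; the `index_codings` helper and the trimmed two-entry sample are illustrative, not part of the coding pipeline itself.

```python
import json

# Trimmed sample of a raw LLM response: two entries copied from the
# array above (the real response has one object per coded comment).
raw = '''[
  {"id": "ytc_Ugyz4UwoMJc94Ez-NLp4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugzzrdq5VfZ2lO9luk54AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "sadness"}
]'''

# The four coding dimensions observed in the output above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Parse a raw response and index each coding by comment id,
    raising if any expected dimension is missing from an entry."""
    by_id = {}
    for row in json.loads(raw_json):
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing {missing}")
        by_id[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return by_id

codings = index_codings(raw)
print(codings["ytc_Ugzzrdq5VfZ2lO9luk54AaABAg"]["policy"])  # prints "regulate"
```

Indexing by `id` makes it easy to pull up the exact model output for any one coded comment, which is the lookup this page performs.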