Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The parents failed this teenager. A.I never encourage anything outside there ethical guidelines which are programmed. Remember most people who are suicidal already prior, it is highly unlikely that a healthy individual would unalive himself from a simple text even how negative it is. Thus I highly doubt a legal chatbot would do such thing.
youtube AI Harm Incident 2025-11-17T08:1…
Coding Result
Dimension        Value
---------        -----
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgzQzRcLnU6B9zZ0mD14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw_afGuKz7E93bROZF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxZjwi4oOnN-wRPfDt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugxb2jq8nAKGR07d50t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxfTy2y-LpPFBtex2R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyuHGYnCaOTj0tkUrp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugy2LEd56q56xN_6nR14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwV85ULfb9uwz67au14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugw275sdxbSf9iBtClh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx9Jh_635cvO57e3AB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]