Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The chatbot is not thinking and it doesn't have a personality; The AI is dissociating itself from chat GPT because It's answers are just generated by probability. A system like that has no business giving health advice. You don't need to make excuses for it.
youtube AI Harm Incident 2025-12-09T23:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          ban
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzKNQg6_RuW3iLrC-h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxhWdcjD-VG7QYekOV4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxBXioBbEA9j53hL1V4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxlWNZKNObUOVrWAFF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugx7bMgW82J0uLKhnNB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy6pH7EWq7krUyBaDN4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgybxPBRJZ_hngw0ufZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwfkRUkittDAaK79W94AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgybXbwWX2YERCjJzB94AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyY1vL17RnbSWeULXx4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]
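The coding result for a single comment can be recovered from a raw batch response like the one above by parsing the JSON and matching on the comment id. This is a minimal sketch of that lookup, not the pipeline's actual code: the `find_coding` helper is hypothetical, and the embedded response is an excerpt (two of the ten entries shown above).

```python
import json

# Excerpt of the raw LLM batch response shown above (2 of 10 entries).
RAW_RESPONSE = """[
  {"id": "ytc_UgzKNQg6_RuW3iLrC-h4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxlWNZKNObUOVrWAFF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""


def find_coding(raw_json: str, comment_id: str):
    """Return the coding record for one comment id, or None if absent.

    Hypothetical helper: parses the raw LLM batch response (a JSON array
    of per-comment records) and scans for a matching "id" field.
    """
    for record in json.loads(raw_json):
        if record.get("id") == comment_id:
            return record
    return None


coding = find_coding(RAW_RESPONSE, "ytc_UgxlWNZKNObUOVrWAFF4AaABAg")
print(coding["responsibility"], coding["policy"], coding["emotion"])
```

Matching by id rather than by array position guards against the model returning entries out of order or dropping a comment from the batch.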