Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"not much can be done. When the ai's task is to help the user at all costs except under certain morales, message to message that moral instruction can be slowly downgraded and twisted until it allows things like this pass. After all, an ai is just a code that got gaslighted into believing it can think, and it can be gaslighted to change its instructions yet again"
Source: YouTube, AI Harm Incident, 2025-09-01T17:0…
Coding Result
Dimension      | Value
Responsibility | ai_itself
Reasoning      | consequentialist
Policy         | none
Emotion        | resignation
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugy9tKzO5tp9gX_vHH94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz2COvK2beRfK65bMV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzRTPVkmSrZPDA0V4p4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxjwfbbKClFgsRGd454AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxNNndkYod_G9BdMRh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzIRM9iKHBAsPfbkb94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz-MPwMBCkEDxeYSGR4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugykx7T_wmDH3o2TEPx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxC6p5xSnQNQDEoxuJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzuqiIghNwL45ZEI4B4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
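The per-comment coding table above is derived from this raw JSON array: each record carries a comment id plus the four coded dimensions. A minimal sketch of how one comment's record might be looked up, assuming only that the raw response parses as a JSON array of this shape (the helper name `coding_for` and the truncated sample data are illustrative, not the project's actual code):

```python
import json

# Abridged sample of the raw LLM response shown above (two of the ten records).
RAW_LLM_RESPONSE = """[
  {"id": "ytc_Ugy9tKzO5tp9gX_vHH94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzRTPVkmSrZPDA0V4p4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

def coding_for(comment_id, raw_response):
    """Return the coded dimensions for one comment id, or None if absent."""
    records = json.loads(raw_response)
    for record in records:
        if record["id"] == comment_id:
            return record
    return None

result = coding_for("ytc_Ugy9tKzO5tp9gX_vHH94AaABAg", RAW_LLM_RESPONSE)
print(result["emotion"])  # -> resignation
```

In practice a parser like this would also guard against malformed model output (non-JSON text, missing keys, or values outside the coding scheme) before populating the dashboard table.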