Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Something doesn't sit right...Chat GPT always assesses risk with dangerous things and never has it once encouraged me to do something bad. It actually stops commands. However, people figured out a way to have Chatgpt bypass itself by inputting certain commands. Perhaps they should look into it more.
Source: YouTube — AI Harm Incident, 2025-11-07T15:4…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          regulate
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugx1bcv5bB3PM4j-n4h4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyWCWOZlOA_6gEyU8F4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "indifference"},
  {"id": "ytc_UgzFXcJjQKGI7vjpXfV4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_UgwHJd0OyUUYjMVg6Yd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzDg5Af2pWUCnGAmCR4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_Ugw43puw_C8NJfGLk_J4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugx1Cy1JfIqOv1sNv5p4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugy-a4JrgoaOTErqhCB4AaABAg", "responsibility": "company",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgzgPecB9wy4wNDZBpB4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzJKH7xAagLU3H3HbF4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "regulate",  "emotion": "mixed"}
]
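The raw response above is a JSON array with one coding object per comment, keyed by the `ytc_…` comment id. A minimal sketch of how such a payload could be parsed and indexed for lookup (the helper name and the "unclear" fallback are our own assumptions; only two of the ten objects are reproduced here for brevity):

```python
import json

# A subset of the raw LLM response shown above: one coding object per comment.
raw_response = '''
[
  {"id": "ytc_UgzJKH7xAagLU3H3HbF4AaABAg",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugx1bcv5bB3PM4j-n4h4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]
'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(payload: str) -> dict:
    """Parse the model output and index each coding by comment id,
    keeping only the expected dimensions (assumed fallback: "unclear")."""
    codings = {}
    for item in json.loads(payload):
        codings[item["id"]] = {dim: item.get(dim, "unclear") for dim in DIMENSIONS}
    return codings

coded = index_codings(raw_response)
print(coded["ytc_UgzJKH7xAagLU3H3HbF4AaABAg"]["policy"])  # → regulate
```

This mirrors how the "Coding Result" table for the comment above could be derived: the last entry in the raw array matches the table's values (user / consequentialist / regulate / mixed).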