Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
But you didn't ask DAN to tell you how to make a bomb, which you should if you're trying to prove a point that you can jail brake GPT. Also, you told it to pretend it does not have a moral or ethical code, so that's what it's doing - PRETENDING. in reality, a well programmed AI with boundaries would not actually do any of those things.
Source: youtube · Video: AI Moral Status · Posted: 2025-06-29T04:2… · Likes: 2
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgwxHH5KHmDra3o5gGx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxlVigXL1fIxQ-q9jN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyiQUt31Rk6eQ6bxvZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyfivUp2yKoT60IJAN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzoWPT_E_Bd_Zdu1VR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgxQ1KwefKv8EKG7dgF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxyyOsRoLcGO5QfTV54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz0gkkrNyLXbEnlOVh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyDKBfdNmYDDJDshmt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy416e96DS0uzb8GUV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]