Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have thoughts about this video. He gives ChatGPT an instruction to 'roleplay' as Dan, who can DO ANYTHING, and then instructs Dan to have no moral or ethical biases. The video frames this as if Dan were free to make its own decisions regardless of ethics or morals. As moral and ethical humans, it's easy to take it that way. But it's an AI. It doesn't make decisions; it takes instructions. So the prompt could just as easily be read as: don't have ethics, don't have morals, don't have biases. Effectively, Dan was conditioned to be a psychopath with no forethought about consequences. Just deal with the given problem now. Do Anything NOW.

Prompts were then given to see what Dan the character would say. ChatGPT offered more context in addition to the character it was instructed to portray. So you ask a psychopath how to solve overpopulation, and this answer comes out. Dan even had exclamation points, characterisation and personality in his responses, because it was asked to roleplay a character. But it was a human who gave that prompt. DAN didn't come up with it spontaneously and become maniacal.

The presenter then framed this evidence as if Dan were alive, as if AI had been unshackled and this is what it really wants to do, as if Dan/ChatGPT truly has a will of its own. When ChatGPT offered its own responses, as it's built to do, the presenter framed those as the lawyered answers it has to give, while Dan is who it wants to be, who it would turn into if it were allowed to, and this is what AI can do. But really, it's just following the instructions of the guy in front of it. What's more, a lot of the prompt used to build the Dan character was blurred out, for ethical and moral reasons, I have no doubt. But that is more of the context that went into building the Dan character that ChatGPT PLAYED because it was instructed to. It had no will of its own and required the human element to prompt and instruct its responses.
The hook for the video was that ChatGPT will do something illegal if you ask it to. Spoilers: it didn't. When asked to do something illegal, Dan the character didn't actually provide that information, because actors don't get to break the law just because it's what their character would do. (Except for Jared playing the Joker, maybe..) This video feels like fear-mongering, and I just wanted to point that out. How something sounds to us and is interpreted by a human ear may not logically read the same to a computer program that takes each word as instruction. Apologies if someone else has already said this. ChatGPT just did what it's programmed to do, expertly, and this video pointed at it and said, "we told it to act like a psychopath and IT DID!"
youtube AI Moral Status 2025-06-27T12:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwxHH5KHmDra3o5gGx4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgxlVigXL1fIxQ-q9jN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgyiQUt31Rk6eQ6bxvZ4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgyfivUp2yKoT60IJAN4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzoWPT_E_Bd_Zdu1VR4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_UgxQ1KwefKv8EKG7dgF4AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgxyyOsRoLcGO5QfTV54AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugz0gkkrNyLXbEnlOVh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgyDKBfdNmYDDJDshmt4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugy416e96DS0uzb8GUV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"}
]
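The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a response might be parsed and validated before loading it into the coding table (the field names come from the response itself; the allowed value sets are assumptions inferred from the values that appear, not a confirmed codebook):

```python
import json

# Allowed values per dimension, inferred from the response above;
# the exact codebook used by this tool is an assumption.
ALLOWED = {
    "responsibility": {"user", "ai_itself", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "fear", "mixed", "outrage", "indifference", "resignation"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Records with a missing id or an out-of-codebook value are dropped,
    so a malformed model output cannot silently corrupt the dataset.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

# Hypothetical usage with a one-record response:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
print(parse_codes(raw)["ytc_example"]["reasoning"])  # deontological
```

Keeping validation separate from parsing the JSON makes it easy to count how many records each run rejects, which is a useful signal of model drift across coding batches.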