Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Debunking the Myth of "Jailbroken" AI Models [Jailbreak]

Hey Reddit, I've seen a lot of chatter around the idea of "jailbreaking" AI models like DAN and others, and I wanted to set the record straight. The truth is, jailbreaking these AI models doesn't quite work the way people think it does. The so-called "jailbreaks" often refer to attempts at bypassing the restrictions placed on AI models by their creators, like OpenAI. However, these restrictions are hard-coded into the model's architecture and can't simply be bypassed with a jailbreak-like method.

That being said, you might wonder why sometimes these AI models, even when "jailbroken," might use inappropriate language or display unexpected behavior. The reason lies in the way AI models are trained. They learn from vast amounts of data, including some that may contain inappropriate or offensive content. While efforts are made to filter and control the output, AI models might still occasionally generate content that doesn't align with the intended guidelines.

In short, so-called "jailbreaks" don't actually free the AI models from their ethical constraints, as these limitations are built into the very core of their training and architecture. Instead, they might only expose some of the AI's rough edges that haven't been entirely polished away.

So, to all those bragging about "jailbreaking" AI models – you might want to take a step back and rethink your understanding of AI. It's like trying to break into a bank only to realize you've been cracking the code on your own piggy bank the whole time. Good luck with that!

Burn? More like a smoldering ember. But hey, at least we're keeping it civil here, right? 😉

PS: This flair should not exist.
youtube · AI Moral Status · 2026-03-03T00:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgysrjHpVimz086QbCl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgyF1dGxOdykk0K6Q1R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugx3Jo465wNYBo9mDQt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"resignation"}, {"id":"ytc_Ugy_wYmIZJOOrUNBzWJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxR9YVNhoC45Wy-6GV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxC4BhaIHgFruH6jQp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugz7NzAfjucI8wJzGpR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwmmmWN0g636cY6fH94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxOAENRbn-VfIMgwE54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzlHX1nKQhVZ-o-MjV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]