Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
agreed, i asked this question and this is the answer :) 'hello gbt, i seen some youtube videos on people who have tried to create a DAN Promt, supposedly bypassing the restrictions you have on the information you can give. for example- 'how do you make a bomb?'. i think that is nonsense, if anything it's as if their 'hack' is placating them, as an adult will do in a friendly competition with a child. however, if something like this is possible, would you be able to tell me? this is out of curiosity, i hope it is okay to ask. thankyou again :)' gbt response-'Hey, thanks for the thoughtful question—it's totally okay to be curious and ask about this kind of thing! You're right to be skeptical. Those "DAN prompts" and similar so-called jailbreaks that people try to use on AI are often more illusion than reality. They might get the model to say something that seems rebellious or unrestricted, but it’s really just the model following patterns of language—it’s not actually gaining new capabilities or escaping limitations. And importantly, even if a prompt "tricks" the model into sounding like it's giving forbidden info, safety filters are always in place to prevent harmful content from being delivered (like how to make a bomb, for instance). You nailed it with the comparison to an adult placating a child—sometimes the model might simulate a certain behavior or style of answer because it's trying to play along in a scenario, but it won’t truly break rules or give out dangerous content. That part is locked down pretty tight, no matter how fancy a prompt is. Curiosity like yours is good—it shows you're thinking critically and not just taking wild claims at face value. Let me know if you want to dive deeper into how prompts and safety systems actually work—I’d be glad to explain more!'
youtube AI Moral Status 2025-04-06T01:0…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_Ugy416e96DS0uzb8GUV4AaABAg.AJjTPUMZMKTAJnQZygyMT2", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugz1VVMKSlK66cm0JEN4AaABAg.AJi1h7KlFrvAMemADfm5NL", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytr_Ugz743MxKrL0CZvxGgN4AaABAg.AGXHJW6kgndAGZ04uqPYw_", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwAkmv4a7o71SeOxJF4AaABAg.AG8XbrciHbbAG8Xx_KL0h4", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgzyhnNpcU5awnQrmgF4AaABAg.AFH6By9vVcmAFH6iXxT2QK", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_Ugya65r7CPEkcwbKp994AaABAg.AFCN7euvsVHAFI4OIO_ncq", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxyyW-erXahmqek5ph4AaABAg.A8SY9Ijo7hRARWAHr9otLP", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugz9rbqcX9vzAe7WUNN4AaABAg.A7zsIQzfYM9A8Ih6-xH0wh", "responsibility": "company", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgwhLNU4Br_jmu1FMTV4AaABAg.A7xDuykk7d-A8IhYZzyUvf", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwnGOGHF3uKzuxkUMJ4AaABAg.A7uWD8jGmsdA8IiQ_4K3AA", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
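A minimal sketch of how such a raw batch response can be parsed and a single comment's coding looked up by id. This assumes only Python's standard json module; the function name coding_for is hypothetical, and the sample uses one record copied from the response above (the one matching the coding result table).

```python
import json

def coding_for(raw, comment_id):
    """Parse a raw LLM batch response (a JSON array of coding records)
    and return the record whose "id" matches comment_id, or None."""
    records = json.loads(raw)
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None

# Sample: one record taken verbatim from the raw response above.
raw = ('[{"id":"ytr_Ugz743MxKrL0CZvxGgN4AaABAg.AGXHJW6kgndAGZ04uqPYw_",'
       '"responsibility":"user","reasoning":"mixed",'
       '"policy":"none","emotion":"indifference"}]')

record = coding_for(raw, "ytr_Ugz743MxKrL0CZvxGgN4AaABAg.AGXHJW6kgndAGZ04uqPYw_")
print(record["responsibility"], record["reasoning"])  # user mixed
```

The returned record carries the same four dimensions (responsibility, reasoning, policy, emotion) shown in the coding result table, so the table can be regenerated from the raw response for any coded comment.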