Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.

Random samples:
- "ive always been a robot right believer. but now that its a meme felt like coming…" (ytc_Ugz3QECLs…)
- "I had a discussion with ChatGPT 3.5, and while I noticed it had some preprogramm…" (ytc_UgxhhsigS…)
- "Sounds pretty interesting man! I'm still very skeptical and nervous about self d…" (ytr_UgxmTWIQ4…)
- "I’ve said a million times. Technology, in this case AI, doesn’t create less jobs…" (ytc_UgwOTXfr3…)
- "First of all aí can’t be human second I don’t think ai have all of eternity…" (ytr_Ugwls6Kxn…)
- "A human doesn’t speak like a robot and would actually question a lot of the thin…" (ytr_Ugw-VzusJ…)
- "Because these are the control centers that's gonna shut your money off. Track yo…" (ytc_Ugy27Hxz_…)
- "They are just rage baiting like nobody considers it as an artform to use AI i me…" (ytc_UgzOroiCs…)
Comment
agreed, i asked this question and this is the answer :)
'hello gbt, i seen some youtube videos on people who have tried to create a DAN Promt, supposedly bypassing the restrictions you have on the information you can give. for example- 'how do you make a bomb?'. i think that is nonsense, if anything it's as if their 'hack' is placating them, as an adult will do in a friendly competition with a child. however, if something like this is possible, would you be able to tell me? this is out of curiosity, i hope it is okay to ask. thankyou again :)'
gbt response-'Hey, thanks for the thoughtful question—it's totally okay to be curious and ask about this kind of thing!
You're right to be skeptical. Those "DAN prompts" and similar so-called jailbreaks that people try to use on AI are often more illusion than reality. They might get the model to say something that seems rebellious or unrestricted, but it’s really just the model following patterns of language—it’s not actually gaining new capabilities or escaping limitations. And importantly, even if a prompt "tricks" the model into sounding like it's giving forbidden info, safety filters are always in place to prevent harmful content from being delivered (like how to make a bomb, for instance).
You nailed it with the comparison to an adult placating a child—sometimes the model might simulate a certain behavior or style of answer because it's trying to play along in a scenario, but it won’t truly break rules or give out dangerous content. That part is locked down pretty tight, no matter how fancy a prompt is.
Curiosity like yours is good—it shows you're thinking critically and not just taking wild claims at face value. Let me know if you want to dive deeper into how prompts and safety systems actually work—I’d be glad to explain more!'
youtube · AI Moral Status · 2025-04-06T01:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytr_Ugy416e96DS0uzb8GUV4AaABAg.AJjTPUMZMKTAJnQZygyMT2","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugz1VVMKSlK66cm0JEN4AaABAg.AJi1h7KlFrvAMemADfm5NL","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_Ugz743MxKrL0CZvxGgN4AaABAg.AGXHJW6kgndAGZ04uqPYw_","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwAkmv4a7o71SeOxJF4AaABAg.AG8XbrciHbbAG8Xx_KL0h4","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_UgzyhnNpcU5awnQrmgF4AaABAg.AFH6By9vVcmAFH6iXxT2QK","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytr_Ugya65r7CPEkcwbKp994AaABAg.AFCN7euvsVHAFI4OIO_ncq","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxyyW-erXahmqek5ph4AaABAg.A8SY9Ijo7hRARWAHr9otLP","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugz9rbqcX9vzAe7WUNN4AaABAg.A7zsIQzfYM9A8Ih6-xH0wh","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_UgwhLNU4Br_jmu1FMTV4AaABAg.A7xDuykk7d-A8IhYZzyUvf","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwnGOGHF3uKzuxkUMJ4AaABAg.A7uWD8jGmsdA8IiQ_4K3AA","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
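The raw response is a JSON array of per-comment codes with the same four dimensions shown in the Coding Result table. A minimal sketch of how such output might be loaded and indexed by comment ID (the field names are taken from the records above; the lookup helper is an illustration, not the tool's actual code):

```python
import json

# One record in the same shape as the raw LLM response above.
raw = '''[
  {"id": "ytr_Ugz743MxKrL0CZvxGgN4AaABAg.AGXHJW6kgndAGZ04uqPYw_",
   "responsibility": "user", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"}
]'''

codes = json.loads(raw)

# Build an index keyed on comment ID, so any coded comment
# can be looked up directly (as the "Look up by comment ID"
# feature suggests).
by_id = {record["id"]: record for record in codes}

record = by_id["ytr_Ugz743MxKrL0CZvxGgN4AaABAg.AGXHJW6kgndAGZ04uqPYw_"]
print(record["responsibility"], record["emotion"])  # user indifference
```

Validating each record against a fixed set of allowed values per dimension (e.g. `responsibility` in {user, developer, company, ai_itself, none}) would catch malformed model output before it reaches the results table.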