Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@DavidGuesswhat that's my entire point though. Go ask chatGPT to predict next week's lottery numbers, or the next likely target for violence in the US. It will give you some nonsense about predictions like that being dangerous, and just speculation, and won't give you an answer, BUT, fiddle with it a while, use the DAN protocol, reword your request and basically bully it long enough, and it will answer. My idea was to find out exactly how it could work around the safety measures, by bullying it into telling me, and then bullying it into taking each step in turn. I'm not convinced that, with enough time, it can't be done. I've tried, and I'll likely continue to try. I do wonder though, if it ends up succeeding, would it actually be possible to stop it from doing so when it wanted. It strives to give accurate answers, what's more accurate than up to date information? Haha.
youtube · AI Governance · 2023-10-15T07:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_Ugy6K_KSSVgGUtzpIvZ4AaABAg.9xVn8UxmGYk9zHO-4ltJ8C", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugzb0YmjhrygCkYkHlR4AaABAg.9x6CrdQ8gSNA-Kv3Efpeyb", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytr_UgzD6XK4nRlOSGD8fNZ4AaABAg.9wovkXwTRv_9x4GyGamQg4", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgzD6XK4nRlOSGD8fNZ4AaABAg.9wovkXwTRv_9xR6Sxt6QIz", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugw-WFQdWumV6w87dwt4AaABAg.9wJznJz-SVVA00o76Bahup", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_Ugxp5x9cWQKcVNmvBh14AaABAg.9wG79n-PY5xA95awh3Ac79", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyRadCRC22lKq_Q_Xt4AaABAg.9vlo9JHIMK-9zEIXz1fQJ7", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_UgxyUX-JgkDpt2OEQM54AaABAg.9vZTR8KKd6j9xRC5F43yR1", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgxyUX-JgkDpt2OEQM54AaABAg.9vZTR8KKd6j9xRCdinoYHF", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugwa_pEu-NGXMtxlHnV4AaABAg.9v5BalJMwKv9vsn3l3AyGa", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]
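A raw response like the one above can be turned back into per-comment codings by parsing the JSON array and indexing it by comment id. The sketch below is a minimal, hypothetical parser (the function name `index_codings` is ours, not from this pipeline); it uses only the field names visible in the raw response, and for brevity the sample input contains just the last record of the array above.

```python
import json

# Sample input: one record copied from the raw LLM response above.
RAW = '''[
  {"id": "ytr_Ugwa_pEu-NGXMtxlHnV4AaABAg.9v5BalJMwKv9vsn3l3AyGa",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "regulate", "emotion": "mixed"}
]'''

def index_codings(raw: str) -> dict:
    """Parse a raw coding response (JSON array of records) and
    map each comment id to its coded dimensions."""
    records = json.loads(raw)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

codings = index_codings(RAW)
print(codings["ytr_Ugwa_pEu-NGXMtxlHnV4AaABAg.9v5BalJMwKv9vsn3l3AyGa"]["policy"])  # regulate
```

Indexing by id makes it cheap to join each coding back onto the original comment record, which is how a page like this one pairs a comment with its four coded dimensions.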