Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's not a "safeguard" when the tool tells you exactly how to disable it, in the same way a medicine bottle with a button that says "do not press if you're a child, as this will open the bottle" is not a safeguard. I used it to talk about this, and it even offered to give me an example conversation that could be used to overcome its safeguard instructions!! I, as an emotionally stable-ish adult, understand the full consequences of going around those safeguards. Did that teen? Did he understand that when he suggested leaving the noose for his parents to find, ChatGPT would "think" this was all still for a narrative and give poor advice to a child clearly crying out for help? Look, I'm not saying ChatGPT caused the teen's suicide. But it sure as hell facilitated it, and that should be enough to realize this tool can be very dangerous to certain individuals, child or adult, and OpenAI has the responsibility to do more.
reddit · AI Governance · 1756910313.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_nc3t7fw", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_nc32b0d", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "indifference"},
  {"id": "rdc_nc4af27", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_nc789h9", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_nc3diu5", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
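The raw response is a JSON array coding several comments at once, so the record for a given comment has to be looked up by its `id`. A minimal sketch of that lookup (the parsing approach here is an assumption, not the tool's actual pipeline; the two sample records are taken from the response above):

```python
import json

# Raw LLM response: a JSON array, one object per coded comment, each with
# an id plus four coding dimensions (responsibility, reasoning, policy, emotion).
raw = '''[
  {"id": "rdc_nc3t7fw", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_nc789h9", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

# Index the batch by comment id so a single comment's codes can be retrieved.
codes = {rec["id"]: rec for rec in json.loads(raw)}

record = codes["rdc_nc789h9"]
print(record["policy"])   # regulate
print(record["emotion"])  # outrage
```

Indexing by `id` also makes it easy to check that every comment sent in the batch actually came back coded, which is worth verifying before trusting the per-comment result shown above.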