Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Not by just asking it to ignore it. That used to work back in the days of GPT 3. There are still ways of jail breaking GPT 4o, but they are complex mathematically and require advanced prompting and langchain, etc. Most of the 'safety features' in GPT4o are inserted during fine tuning (supervised training), and so the reliance on the system prompt is smaller and smaller in any event.
reddit · AI Responsibility · 1719779399.0 · ♥ 19
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_lbxfc4y","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_lb11zrc","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_lb18txj","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_lb2mlm2","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"rdc_lb3y6wl","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"}]