Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hi, I am an AI Safety researcher at a top university. It's a flourishing field, tons of mechanisms for controlling future AI (practicing on current AI) are being proposed and tested. But it's also a field that needs a lot more support and is not ready yet for AGI at all. This is clear by how many distinct methods there are to jailbreak ChatGPT. That doesn't matter now, but it's a sign that we don't have control, and in the future that will matter.
Source: reddit · AI Governance · 1728546669.0 · ♥ 12
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          industry_self
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_lr63axw", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_lr7e97j", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_lr68zwh", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_lr7yhhe", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_lr7py00", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
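A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming hypothetical allowed-value sets for each dimension (the real codebook may list different categories); it parses the JSON array and reports which records use only recognized labels.

```python
import json

# Two sample records in the same shape as the raw response above.
raw = """[
  {"id": "rdc_lr63axw", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_lr7yhhe", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]"""

# Hypothetical allowed values per dimension, inferred from the labels that
# appear in this dump; the actual coding scheme may include more categories.
ALLOWED = {
    "responsibility": {"developer", "none", "unclear"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"industry_self", "ban", "none", "unclear"},
    "emotion": {"approval", "outrage", "indifference", "mixed"},
}

def valid_ids(records):
    """Return ids of records whose value for every dimension is recognized."""
    return [
        rec["id"]
        for rec in records
        if all(rec.get(dim) in values for dim, values in ALLOWED.items())
    ]

records = json.loads(raw)
print(valid_ids(records))  # → ['rdc_lr63axw', 'rdc_lr7yhhe']
```

Records that fail the check (for example, a model hallucinating a label outside the codebook) can then be flagged for manual review rather than silently ingested.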