Raw LLM Responses

Inspect the exact model output for any coded comment below.

Comment
Just to iterate on this point; [OpenAI recently disbanded it's Superalignment team. ](https://techcrunch.com/2024/05/18/openai-created-a-team-to-control-superintelligent-ai-then-let-it-wither-source-says/?guccounter=1) For people not familiar with AI-jargon. It's a team in charge to make sure an AI is aligned with our Human goals and values. They make sure that the AI being developed doesn't develop unwanted behaviour, implement guardrails against certain behaviour, or downright make it incapable of preforming unwanted behaviour. So they basically prevent SkyNet from developing. It's the AI equivalent of suddenly firing your whole ethics committee. Edit: fixed link
reddit AI Governance 1716800516.0 ♥ 72
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_l5x8cs4", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_l5uynog", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_l5vfkeq", "responsibility": "company", "reasoning": "contractualist", "policy": "liability", "emotion": "mixed"},
  {"id": "rdc_l5wvax1", "responsibility": "company", "reasoning": "mixed", "policy": "industry_self", "emotion": "outrage"},
  {"id": "rdc_l5uy1rp", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
```
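The model codes a batch of comments in one response: a JSON array with one object per comment `id`, so recovering a single comment's coding means parsing the array and looking up its id. A minimal illustrative sketch (not part of the actual coding pipeline; the lookup logic here is assumed, only the JSON payload comes from the response above):

```python
import json

# Raw LLM response: a JSON array, one coding object per comment id.
raw = """[
  {"id":"rdc_l5x8cs4","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_l5uynog","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_l5vfkeq","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
  {"id":"rdc_l5wvax1","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"outrage"},
  {"id":"rdc_l5uy1rp","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]"""

codings = json.loads(raw)
by_id = {c["id"]: c for c in codings}  # index the batch by comment id

# The comment shown on this page carries id rdc_l5uynog; its entry
# matches the Coding Result table (company / consequentialist / regulate / fear).
coding = by_id["rdc_l5uynog"]
print(coding["responsibility"], coding["emotion"])  # → company fear
```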