Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yes and we are actually currently facing the opposite concern. Because of the heating up AI race we have *fewer and fewer time* to do safety testing. Frontier labs are already forced to release models as quickly as possible to the general public with an ever shortening time to do safety tests. There's a reason why safety experts quit or leave when they can't do their job properly. And it's one of the main reason why Anthropic changed its safety pledge. Because of airheaded people like OP forcing us to release models that aren't ready for public release yet. One of these days we might accidentally release a model that doesn't refuse to help you design a pathogen targeting specific races of people with just $3000 in home equipment and we'll be in a world of hurt. But at least people like OP got their toy 2-3 months earlier, so it's all worth it.
Source: reddit · Topic: AI Moral Status · Timestamp: 1773271119.0 · ♥ 24
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_o9vuvpi", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "rdc_o9zlsrt", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "rdc_oamhlx5", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "rdc_o9w3oxi", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "outrage"},
  {"id": "rdc_o9y0ohl", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
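As the raw response shows, the model codes a batch of comments in one call and returns a JSON array of per-comment objects keyed by `id`. A minimal sketch in plain Python (no assumed project API; the ids and values are taken from the response above) of how such a batch can be parsed to recover the coding for a single comment:

```python
import json

# Raw LLM response: a JSON array, one coding object per comment id
# (trimmed to two of the five entries shown above for brevity).
raw = '''[
  {"id": "rdc_o9vuvpi", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_o9y0ohl", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]'''

# Index the batch by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Recover the coding for one comment (the one displayed in this section).
coding = codings["rdc_o9y0ohl"]
print(coding["responsibility"], coding["emotion"])  # company fear
```

Because the batch carries explicit ids, the order of objects in the array does not need to match the order in which comments were submitted.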