Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sharing this tech with humanity is potentially a problem. If I give a psychopath a robot that can independently create a 99.9% lethal virus with a basic chemistry set, that's a serious problem. That's the whole idea behind AI alignment work. The race to AGI isn't simply about people "hoarding the benefits," it's about racing bad actors to the finish line so that we can be prepared when terrorists, etc. make their move.
reddit · AI Moral Status · 1738019035.0 · ♥ 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_m9iarpq","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"rdc_m9j52y8","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"rdc_m9lge6b","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"rdc_m9jfukq","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"rdc_m9i2w8p","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"outrage"} ]