Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm a software engineer who works with AI daily, and ~90% of my code is written by AI. That does not imply "minimal human oversight". I read every single line of the code before putting it out for review (often requesting many changes, because the AI isn't as smart as a human), and then another engineer has to review the code before it lands. Another AI also reads over the code and notes anything suspicious, and then our automated test system gets its hands on it to make sure it hasn't broken anything. While I'm certainly not suggesting that an ASI couldn't figure out a way to sneak malicious code in, it would have to be much smarter than current AI systems to do so. I don't know specifically about Anthropic's engineering culture, but this is standard in the industry so I'd imagine theirs is similar. I highly doubt they are vibe coding Claude.
reddit · AI Moral Status · 1773256268.0 · ♥ 36
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_oa2bwfk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
 {"id":"rdc_o9zl5cw","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_o9vtexd","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"rdc_o9wluvn","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"rdc_o9y8h8g","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"})
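Note that the raw response above closes with ")" rather than "]", so it is not valid JSON; a strict parser will reject it, which would leave every dimension at its "unclear" fallback. A minimal sketch of how such a response might be handled defensively (the function and constant names here are hypothetical, not taken from any known pipeline):

```python
import json

# Dimensions expected in each coding record; "unclear" is the fallback value.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response into per-dimension records.

    If the text is malformed JSON (e.g. terminated with ')' instead
    of ']'), return a single all-"unclear" record instead of raising.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return [{dim: "unclear" for dim in DIMENSIONS}]
    # Fill any missing dimension with "unclear" rather than failing.
    return [{dim: rec.get(dim, "unclear") for dim in DIMENSIONS} for rec in records]


# A truncated illustration of the failure mode seen above: stray ')' at the end.
bad = '[{"id":"x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"})'
print(parse_coding_response(bad))  # one all-"unclear" record
```

This mirrors the table above: a single parse failure collapses all four dimensions to "unclear" for the comment, which is one plausible reason the coded values and the raw response disagree.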