Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the OP provided the wrong link, here's the blog from Apollo covering the research they tweeted about last night: https://www.apolloresearch.ai/blog/claude-sonnet-37-often-knows-when-its-in-alignment-evaluations
Source: reddit · Topic: AI Moral Status · Timestamp: 1750430859.0 · ♥ 9
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_mza25tf","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_mzx4m9p","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"rdc_mytju59","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_myw2vqz","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"rdc_mytrr0u","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"})
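Note that the raw response terminates with `)` where a JSON array requires `]`, so a strict `json.loads` call fails on it, which may explain why every dimension in the coding result came back "unclear". A minimal defensive parser could repair that specific malformation before parsing; this is a sketch, not the pipeline's actual code, and the function name and repair heuristic are assumptions:

```python
import json

def parse_raw_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, tolerating a stray ')' where ']' belongs."""
    raw = raw.strip()
    if raw.endswith(")"):
        raw = raw[:-1] + "]"  # repair the malformed array terminator
    return json.loads(raw)

# Truncated example mirroring the malformed raw response above.
raw = ('[{"id":"rdc_mza25tf","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"indifference"})')
records = parse_raw_coding_response(raw)
print(records[0]["id"])       # rdc_mza25tf
print(records[0]["emotion"])  # indifference
```

A fallback like this only papers over one failure mode; logging the unparseable raw string alongside the repaired version would preserve the evidence for auditing coded comments like the one above.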