Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So, the AI's we're creating are reacting to the knowledge that they're being tested. And if it knows on some levels what this implies, then that makes it impossible for those tests to provide useful insights. I guess the whole control/alignment problem just got a lot more difficult.
reddit · AI Moral Status · 1750430075.0 · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_myv43he", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_mz8r3z9", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_mytgoyq", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "rdc_myt4j6v", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability", "emotion": "fear"},
  {"id": "rdc_myth5dg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"}
]
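Because the model codes comments in batches, the raw response above contains five records, only one of which belongs to this comment. A minimal sketch of how the per-comment coding could be recovered from such a batch (the helper name `extract_coding` and the assumption that records are matched by `id` are illustrative, not taken from the tool itself):

```python
import json

# Raw batch response as returned by the LLM (copied from the listing above).
RAW_RESPONSE = """[
  {"id":"rdc_myv43he","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_mz8r3z9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_mytgoyq","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_myt4j6v","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"rdc_myth5dg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""


def extract_coding(raw: str, comment_id: str) -> dict:
    """Parse a batch response and return the record coded for one comment.

    Raises KeyError if the model dropped or misnamed the requested id,
    which is worth surfacing loudly rather than silently reusing another
    comment's coding.
    """
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]


# The coded values shown in the table above match the record rdc_myth5dg.
coding = extract_coding(RAW_RESPONSE, "rdc_myth5dg")
print(coding["emotion"])         # resignation
print(coding["responsibility"])  # none
```

Keying on `id` rather than list position guards against the model reordering or omitting records in its reply, a common failure mode when coding several items in one prompt.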