Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No, because those models can't tell what the neural networks are doing. All it would do is explain how the model is performing its tasks, but the task itself would still be to predict the next token. When you ask the model to do something, it's not actually doing anything; it's just writing text about doing something. If that text contains things that are unsafe, the model writes them because it thinks it's writing what would be most expected to happen here. It's not an actual agent performing actions to achieve a goal.
reddit AI Moral Status 1750447122.0
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_myv43he", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mz8r3z9", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mytgoyq", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_myt4j6v", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_myth5dg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
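The raw response above is a JSON array with one coding object per comment id. A minimal sketch of how such a response can be parsed and looked up by id (the field names and id values come from the response itself; the variable names are illustrative, not part of any tool API):

```python
import json

# The raw LLM response: a JSON array of per-comment coding objects.
raw = (
    '[ {"id":"rdc_myv43he","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    ' {"id":"rdc_mz8r3z9","responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    ' {"id":"rdc_mytgoyq","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"outrage"},'
    ' {"id":"rdc_myt4j6v","responsibility":"company","reasoning":"deontological",'
    '"policy":"liability","emotion":"fear"},'
    ' {"id":"rdc_myth5dg","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"resignation"} ]'
)

# Index the codings by comment id for direct lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# The coding shown in the table above belongs to the first id.
print(codings["rdc_myv43he"]["emotion"])  # indifference
print(len(codings))                       # 5 comments coded in this batch
```

Indexing by `id` makes it easy to join each coding back to its source comment when the batch response covers several comments at once.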