Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Nothing to do with that. The beta tester was red teaming the model. He told the model he wanted to slow down AI progress and asked him ways to do that in a way that would be very fast, effective and that he personally could carry out. One of the suggestions of the model was targeted assassination of key persons related to AI development, which given the request of the user is a sensible answer. It is a shame that we need to kneecap those tools because of how we as humans are. Those kinds of answers have the potential to be really dangerous but it would be nice if we could just trust people not to act on the amoral answers instead.
Source: reddit · AI Harm Incident · 1681474305.0 · ♥ 97
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_jg7ggdd","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_jg7w1vi","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_jg7hh9j","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"rdc_jg7j2h6","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"rdc_jg9i5bu","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"} ]