Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think its important that when you offer an AI a thought experiment, be sure not to introduce the idea that humans killed themselves off. Not because it won't necessarily be true, but rather because it lay the groundwork for the AI to respond in a manor that will resonate most with you. It's a remarkably good question to ask, though. Not least due to it being a reasonable outcome.
Source: reddit · Topic: AI Moral Status · Timestamp: 1757290263.0 · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_namtx8b","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"rdc_nat55sq","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"rdc_nd0139j","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"rdc_nails1c","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_naiw9b6","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"} ]