Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Bingo. Fictional AI being tested in ways it can recognize is in its training data. So are the conversations about how people test AI. That doesn't mean it has or hasn't been programmed to do this, but it has the training data for it, so it's predicting the intention from what it knows, and it knows that humans test their AI in various obvious ways. It's still "emergent" in the sense that it wasn't intended in training, but it's not unexpected.
reddit AI Moral Status 1750467810.0 ♥ 3
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_mythpq8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_mywuwu5","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_myucsmk","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"rdc_mywtrm3","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"rdc_myvp1lx","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]
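The raw response is a JSON array with one coding record per comment, keyed by `id`. A minimal sketch of extracting a record from such a response (the record IDs and field names are taken from the response above; the truncated sample array and the `by_id` helper are illustrative, not part of the actual pipeline):

```python
import json

# Sample raw LLM response: a JSON array of per-comment codings
# (first record copied from the response above).
raw = ('[{"id":"rdc_mythpq8","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')

records = json.loads(raw)

# Index records by comment id for lookup.
by_id = {r["id"]: r for r in records}

coding = by_id["rdc_mythpq8"]
print(coding["emotion"])  # -> indifference
```

A lookup table like this makes it easy to join each coding record back to the comment it annotates, assuming the LLM returns valid JSON; a production pipeline would also need to handle malformed or incomplete responses.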