Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They should add a button labeled "second opinion" or something that prompts the llm, without any context beyond the current conversation, to "carefully and reasonably assess the validity of these ideas in such a way as to help ground the user without provoking them"
reddit · AI Moral Status · 1748372919.0 · ♥ 93
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_mumcb1f","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"rdc_mulxxbj","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"rdc_mum85bg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},{"id":"rdc_muj1m8i","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"rdc_mukf35o","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"]}
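Note that the raw response above is not valid JSON: the array closes with "]}" instead of "}]" after the final record, so a strict parser rejects the entire output. That would plausibly explain why every dimension in the coding result falls back to "unclear". A minimal sketch of that failure mode, assuming a parse-or-fallback step in the coding pipeline (the function name, fallback behaviour, and record handling are illustrative assumptions, not the tool's actual implementation):

```python
import json

# Dimension names taken from the coding-result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def code_dimensions(raw_response: str) -> dict:
    """Parse a raw LLM coding response (expected: a JSON array of records).

    If the response is malformed -- e.g. the swapped "]}" closer seen
    above -- every dimension falls back to "unclear". This fallback is
    an assumption about the pipeline, not confirmed behaviour.
    """
    try:
        records = json.loads(raw_response)
        first = records[0]  # take the first record's coded values
        return {d: first.get(d, "unclear") for d in DIMENSIONS}
    except (json.JSONDecodeError, IndexError, TypeError, AttributeError):
        return {d: "unclear" for d in DIMENSIONS}


# Shortened versions of the response above: `bad` reproduces the
# swapped "]}" closer, `good` shows the same record closed correctly.
bad = ('[{"id":"rdc_mumcb1f","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"indifference"]}')
good = ('[{"id":"rdc_mumcb1f","responsibility":"none","reasoning":"mixed",'
        '"policy":"none","emotion":"indifference"}]')

print(code_dimensions(bad))   # every dimension falls back to "unclear"
print(code_dimensions(good))  # the coded values parse normally
```

The broad `except` is deliberate: a raw model response can fail in several ways (invalid JSON, an empty array, a non-dict record), and each should degrade to "unclear" rather than crash the coding run.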