Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No, this won't ever change as long as we use LLMs, which are just token predictors at heart. Yes, it could be possible with some completely different approach, but not with LLMs.
reddit · AI Governance · 1757769677.0 · ♥ -1
Coding Result
Dimension        Value
---------        -----
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ne0hh8u", "responsibility": "none", "reasoning": "unclear",          "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_ne0arzv", "responsibility": "none", "reasoning": "consequentialist", "policy": "none",    "emotion": "resignation"},
  {"id": "rdc_ndzmkld", "responsibility": "none", "reasoning": "consequentialist", "policy": "none",    "emotion": "resignation"},
  {"id": "rdc_ndz3o1p", "responsibility": "none", "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "rdc_ne1a0sk", "responsibility": "none", "reasoning": "unclear",          "policy": "unclear", "emotion": "mixed"}
]
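The raw response above is a JSON array covering a batch of comments, from which the single coding result shown earlier is extracted by `id`. A minimal sketch of that extraction step, assuming only the field names visible in the response (`coding_for` is a hypothetical helper, not part of any real tool):

```python
import json

# Raw LLM response for a batch, as shown above (truncated to two entries).
raw = (
    '[ {"id":"rdc_ne0hh8u","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    ' {"id":"rdc_ne0arzv","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"resignation"} ]'
)

def coding_for(raw_response: str, comment_id: str):
    """Return the coding dict for one comment id, or None if absent/malformed."""
    try:
        rows = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # model emitted malformed JSON; caller should log or re-prompt
    return next((r for r in rows if r.get("id") == comment_id), None)

print(coding_for(raw, "rdc_ne0arzv"))
```

Guarding the `json.loads` call matters here because LLM output is not guaranteed to be valid JSON, even when the prompt requests it.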