Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What if we selected three or four humans and gave them the power and resources to make plans for the future to stop the AI? But since their job is to create a plan that an AGI cannot understand, they cannot talk to others about it. Their job is to be deceivers while at the same time creating a plan. We can call them *Wallfacers*, as in the Buddhist tradition.
reddit · AI Governance · 1716787496.0 · ♥ 189
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       contractualist
Policy          none
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id": "rdc_l5uxame", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
 {"id": "rdc_l5vvkfi", "responsibility": "company", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
 {"id": "rdc_l5vzh5a", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
 {"id": "rdc_l5ughy0", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "approval"},
 {"id": "rdc_l5uyz9e", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}]
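To check a single comment's codes against the raw batch output, the JSON array can be parsed and indexed by `id`. A minimal sketch in Python, assuming each `id` maps one-to-one to a coded comment; `"rdc_l5ughy0"` is the entry whose values match the coding table above:

```python
import json

# The raw response shown above: a JSON array with one object per coded comment.
raw = '''[
  {"id": "rdc_l5uxame", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_l5vvkfi", "responsibility": "company", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_l5vzh5a", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_l5ughy0", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_l5uyz9e", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

# Index the batch by comment id so any coded comment can be looked up directly.
by_id = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the entry matching the coding table
# (responsibility=none, reasoning=contractualist, policy=none, emotion=approval).
record = by_id["rdc_l5ughy0"]
print(record["reasoning"], record["emotion"])  # → contractualist approval
```

Indexing by `id` makes the lookup O(1) per comment, which matters when auditing large coding batches.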