Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
I’ve noticed this "validation loop" becoming a massive bottleneck in my own builds. When I’m using Cursor to refactor logic or running my deck outlines through Runable, I don't want a cheerleader; I want a critic that tells me where my flow is breaking. I’ve started explicitly prompting my agents to be more clinical because that generic flattery makes it impossible to tell if my core idea is actually solid or if the AI is just being polite. It’s a trust issue for sure—once you realize the "great question" is just a hardcoded social lubricant, you start ignoring the feedback that might actually be useful. The goal should be precision, not just keeping the user happy.
Source: reddit · Thread: Viral AI Reaction · Timestamp: 1777031474.0 · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_ohzs3wl","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_ohzvxqf","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_oi0p101","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_oi28tkt","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_oi2obue","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
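The raw response is a JSON array of coded records, each carrying the four dimensions shown in the table above. A minimal sketch of how such a response could be parsed and validated — the function name and the strict require-all-dimensions rule are assumptions for illustration, not part of the original pipeline:

```python
import json

# Hypothetical sample: one record in the same shape as the raw LLM response above.
RAW = ('[{"id":"rdc_ohzs3wl","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')

# The four coding dimensions from the results table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coded_records(raw: str) -> list[dict]:
    """Parse the raw JSON array, keeping only records with every dimension present."""
    records = json.loads(raw)
    return [r for r in records if all(dim in r for dim in DIMENSIONS)]

for rec in parse_coded_records(RAW):
    print(rec["id"], {dim: rec[dim] for dim in DIMENSIONS})
```

Dropping incomplete records (rather than raising) is one possible policy; a real pipeline might instead flag them for re-coding.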