Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Interesting observation! As an AI researcher, I've noticed the same trend in how current model architectures prioritize interpolation. We actually built jenova ai's model router to leverage different models' strengths - Claude 3.5 Sonnet for reasoning, Gemini 1.5 Pro for analysis, etc. But even these advanced models still struggle with true extrapolative thinking. You're right about 2025 - I think we'll see major breakthroughs in continuous learning and knowledge synthesis. The key challenge will be balancing creative extrapolation with factual reliability.
reddit · AI Responsibility · 1734392146.0 · ♥ 4
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear

Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_m2eswia","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_m2crpi9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"rdc_m2e0rdk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"rdc_m2chafa","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"rdc_m2d500v","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
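When inspecting a raw response like the one above, it can help to parse the JSON and look up the codes assigned to a specific comment by its `id`. A minimal sketch, assuming the response is a JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys as shown (the variable names here are illustrative, not part of any coding pipeline):

```python
import json

# Raw model output, assumed to be a JSON array of per-comment code objects.
raw = '''[
  {"id":"rdc_m2eswia","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_m2crpi9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]'''

codes = json.loads(raw)

# Index the coded records by comment id for quick lookup.
by_id = {record["id"]: record for record in codes}

# Inspect the codes for one comment.
print(by_id["rdc_m2eswia"]["emotion"])  # approval
```

Note that a response whose closing delimiter is `)` rather than `]` (as sometimes happens with truncated or malformed model output) will raise `json.JSONDecodeError`, so wrapping the parse in a try/except is a reasonable safeguard.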