Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yea, right now very long context increase the amount of hallucinations by a lot, I've noticed it first hand even in simple conversations, let alone giving my whole codebase to an llm
reddit AI Jobs 1754606586.0 ♥ 7
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n7h12le", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_n7id66b", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n7hlinh", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_n7l6zjv", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_n7i3skx", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
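The coding result shown above corresponds to one entry in this batch response. As a minimal sketch of how that lookup could work (assuming the raw response is valid JSON and that the comment on this page was coded under id rdc_n7i3skx; the parsing code is illustrative, not the app's actual implementation):

```python
import json

# Illustrative sketch, not the app's own code: parse the raw batch response
# and look up one comment's coded dimensions. The record ids and field names
# are copied from the response shown above.
raw_response = """[
  {"id": "rdc_n7h12le", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_n7id66b", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n7hlinh", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_n7l6zjv", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_n7i3skx", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

# Index the records by comment id so any comment's coding can be looked up.
coded = {record["id"]: record for record in json.loads(raw_response)}

# Assumption: the comment shown on this page maps to id rdc_n7i3skx.
result = coded["rdc_n7i3skx"]
print(result["reasoning"], result["emotion"])  # consequentialist fear
```

Because the model returns one JSON object per comment in a batch, indexing by `id` is what lets a single comment's page display only its own row of dimensions.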