Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Good test cases exist to make sure a piece of code behaves in a way that the user requires. That's context that's impossible to determine from reading the code alone; it comes from requirements.

> "It's not about 'is it okay?', it's about 'what does okay look like?'" - Kevlin Henney

An LLM will simply vomit out a bunch of test cases that assert that the code behaves the way it currently does. Those test cases will be brittle, coupled to the implementation, and will make the implementation hard to change.
reddit AI Jobs 1728304182.0 ♥ 7
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_lqrgwot", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_lqrmdqa", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_lqrnarq", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_lqrhkex", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_lqrmqdy", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "resignation"}
]
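The coded result above is one entry in this batch response: the record with id "rdc_lqrmqdy" carries the same dimension values as the table. A minimal sketch (Python assumed; this is not part of the original coding tooling) of how a single coded row can be recovered from the raw model output:

```python
import json

# Raw batch response, copied verbatim from the record above.
raw = '''[
  {"id":"rdc_lqrgwot","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_lqrmdqa","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"rdc_lqrnarq","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_lqrhkex","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_lqrmqdy","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"}
]'''

records = json.loads(raw)

# Index the batch by comment id so one coded row can be looked up directly.
by_id = {r["id"]: r for r in records}

coded = by_id["rdc_lqrmqdy"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → developer deontological none resignation
```

The lookup-by-id step matters because the model returns codes for the whole batch, not just the comment being inspected.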