Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I can say 100% that I've told people about some code function or software project that either doesn't exist or was incorrect. Sometimes I have a brain melt and make all sorts of stupid mistakes when coding. It's correct that people should validate gpt-4 output, but that's true of anything on stack overflow etc. What's important to realize is that the code it presents is a single-shot first draft with no testing. If you can find a developer that can do that at even 1/10 the speed then you should hire them on the spot. Again, I agree with the main post that the llms present hallucinations in very convincing ways, and so nothing it says should be trusted without verification - or accept the risks and go for it.
reddit · AI Responsibility · 1682532801.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jhtzcmw", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",     "emotion": "outrage"},
  {"id": "rdc_jhtda7l", "responsibility": "user",        "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "rdc_jhsx5xp", "responsibility": "distributed", "reasoning": "contractualist",   "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_jhsi9sa", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",     "emotion": "resignation"},
  {"id": "rdc_jhsuo8i", "responsibility": "none",        "reasoning": "virtue",           "policy": "none",     "emotion": "mixed"}
]
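To see how a coded result like the table above is recovered from the raw response, one minimal approach is to parse the JSON array and index the codings by comment id. This is a sketch only — the variable names and the lookup id are illustrative, taken from the response shown here; the actual tool's parsing code may differ.

```python
import json

# Raw model response: a JSON array with one coding object per comment.
raw = '''[
  {"id": "rdc_jhtzcmw", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",     "emotion": "outrage"},
  {"id": "rdc_jhtda7l", "responsibility": "user",        "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "rdc_jhsx5xp", "responsibility": "distributed", "reasoning": "contractualist",   "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_jhsi9sa", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",     "emotion": "resignation"},
  {"id": "rdc_jhsuo8i", "responsibility": "none",        "reasoning": "virtue",           "policy": "none",     "emotion": "mixed"}
]'''

codings = json.loads(raw)

# Index by comment id so each coding can be matched back to its source comment.
by_id = {c["id"]: c for c in codings}

# The comment shown above was coded under id rdc_jhtda7l.
coding = by_id["rdc_jhtda7l"]
print(coding["responsibility"], coding["emotion"])  # user resignation
```

Batching several comments into one prompt and matching codings back by id, as the response format here suggests, avoids one API call per comment; the trade-off is that a malformed array silently drops every coding in the batch, so `json.loads` failures should be caught and the batch retried.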