Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have a GitHub Copilot and a Claude Pro subscription. I hardly ever use Claude (neither the extension nor Code). Even if it didn't guzzle tokens like crazy, it hallucinates too much, it takes too long, and its performance has been totally unstable lately. I'm super happy with VS Code and Copilot with GPT 5.4.
reddit · AI Jobs · 1776964112
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ohuioc3", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "rdc_ohurhq7", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oi1ozau", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_k0ajx8g", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_k0f5we9", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
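Since the raw response is a JSON array of per-comment codings, it can be checked programmatically. The sketch below is a minimal example, assuming the raw output parses as valid JSON; the helper name `coding_for` and the truncated two-record excerpt are hypothetical, not part of the tool itself.

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings,
# keyed by comment id (truncated here to two records for brevity).
raw_response = '''[
  {"id": "rdc_ohuioc3", "responsibility": "company", "reasoning": "virtue",
   "policy": "none", "emotion": "approval"},
  {"id": "rdc_ohurhq7", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"}
]'''

def coding_for(raw: str, comment_id: str) -> dict:
    """Parse the raw model output and return the coding for one comment id."""
    records = json.loads(raw)
    by_id = {record["id"]: record for record in records}
    return by_id[comment_id]

# The coded result shown above corresponds to the record with this id.
coding = coding_for(raw_response, "rdc_ohurhq7")
print(coding["responsibility"], coding["emotion"])  # → none indifference
```

Looking up a record by id this way also makes it easy to confirm that the table on the page matches the model's raw output for that comment.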