Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LLMs are designed to interpret human writing. They should be able to look at 500 resumes and say "these 10 most closely match the criteria listed in the job description." That's exactly the kind of work an LLM is designed to do. But you need to be good at talking to AI to get the result you want. Otherwise, it's garbage in, garbage out. Also, the AI needs good training data. I can't speak to the training data of resume-scanning models, but I do notice when my robot buddy has wandered outside his training area and is making shit up. Someone who hasn't been trained on how machine learning works (I worked with Google machine learning team while chatgpt-like tech was being developed) probably wouldn't be able to discern the reason the results are bad - if the robot isn't trained vs if the user gave bad input.
reddit AI Bias 1730503988.0 ♥ -6
Coding Result
Dimension       Value
Responsibility  user
Reasoning       unclear
Policy          none
Emotion         approval
Coded at        2026-04-25T08:13:13.233606
Raw LLM Response
[
  {"id": "rdc_lub9n1a", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_lucj6ny", "responsibility": "none", "reasoning": "unclear",          "policy": "none", "emotion": "outrage"},
  {"id": "rdc_luwulel", "responsibility": "none", "reasoning": "unclear",          "policy": "none", "emotion": "outrage"},
  {"id": "rdc_luxucq9", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_lvagmuf", "responsibility": "none", "reasoning": "unclear",          "policy": "none", "emotion": "outrage"}
]
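The raw response is a JSON array with one coding record per model run. A short sketch of how such a payload can be parsed and tallied per dimension is below; the field names come directly from the JSON above, but the tally itself is illustrative — the section does not state how the single coded value was chosen from the five runs.

```python
import json
from collections import Counter

# Raw LLM response copied verbatim from the section above:
# five coding runs for the same comment.
raw_response = """[
  {"id":"rdc_lub9n1a","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_lucj6ny","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_luwulel","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_luxucq9","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_lvagmuf","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]"""

records = json.loads(raw_response)

# Count how often each label appears in each coding dimension.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(r[dim] for r in records) for dim in dimensions}

for dim in dimensions:
    print(dim, dict(tallies[dim]))
```

Note that the tallies do not all match the final coded values in the table (e.g. "responsibility: user" appears in only one of the five runs), so the coding pipeline presumably applies some selection rule beyond a simple majority vote; that rule is not documented here.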