Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As you seem to understand some of this, I see another inherent problem: in a case like resumes, it seems like a huge problem that the AI can only assess the factors you give it. If you give it all the resumes of all the employees you liked and tell it to start comparing, it will, but given how insanely incomplete a resume is at giving you an impression of a person and their skill set (even a detailed one), I don't see how this winds up being anything other than correlation rather than causation. For example, it highly weights people from Harvard because there's a disproportionate number of people from Harvard who did well. Could be because Harvard trains its students exceptionally well for this sort of company. Or it could be that it was a startup that began in a Harvard dorm, so of course the higher-ups are all from there, and find the occasional employee at alumni dinners or through old classmates. We still have interviews for any serious position almost 100% because resumes do a terrible job of conveying anything but the bare basics (and it almost seems like a stress test just to make sure you care enough about the job to jump through the hoops).
reddit Cross-Cultural 1539206284.0 ♥ 2
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_e7jm1ke","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"rdc_e7jgcg1","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"rdc_e7jcw1i","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
 {"id":"rdc_e7jva6y","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_e7jcktr","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
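As a minimal sketch of how a response like the one above can be turned into the per-dimension table, the snippet below parses the JSON array into a lookup keyed by comment id. The variable names and the two abbreviated records are illustrative, not part of the actual pipeline; it also assumes the raw output has first been repaired to valid JSON (note the original closes the array with ")" rather than "]").

```python
import json

# Illustrative raw LLM response: a JSON array of per-comment codes.
# Ids and values are taken from the dump above; only two records are shown.
raw = (
    '[{"id":"rdc_e7jm1ke","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"resignation"},'
    '{"id":"rdc_e7jcw1i","responsibility":"developer","reasoning":"deontological",'
    '"policy":"none","emotion":"resignation"}]'
)

# Build a lookup: comment id -> its coded dimensions.
codes = {item["id"]: item for item in json.loads(raw)}

# Each entry now maps dimension names to coded values.
print(codes["rdc_e7jcw1i"]["reasoning"])  # deontological
```

When the model emits malformed JSON (as here), a pipeline would typically fail the parse and fall back to marking every dimension "unclear", which matches the coding result shown above.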