Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is silly. AI is terrible at litigation. It cannot think. It cannot abstract. It struggles to do first year work all the time, like finding authority for propositions unless the request is so basic as to be remedial.  Andrew Yang doesn't know what he is talking about. He is quoting some idiot biglaw partner out there. Of which there are a fuckton. Hell, plenty of smart partners don't understand LLM-based AI only mimic, and can't think. Every single smart thing your AI has ever said to you was an imitation of a smarter person. Idk how people think it will ever replace an actual litigator, but I welcome them to try. Hopefully against me. It is a tool for very simple aspects of lit, and it will not be improving until we create an entirely different type of AI that doesn't simply rely on LLM training materials. Which is god knows how far away.
Source: reddit · AI Jobs · 1753678670.0 · ♥ 4
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n5jiky5", "responsibility": "user",    "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_n5k2wfq", "responsibility": "none",    "reasoning": "consequentialist", "policy": "none",      "emotion": "mixed"},
  {"id": "rdc_n5l9o9c", "responsibility": "none",    "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_n5le8ku", "responsibility": "company", "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "rdc_n5o5b1o", "responsibility": "user",    "reasoning": "deontological",    "policy": "regulate",  "emotion": "approval"}
]
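The raw response is a JSON array of coded records keyed by comment id; the coding-result values shown above (responsibility: none, reasoning: consequentialist, policy: none, emotion: mixed) match the record with id "rdc_n5k2wfq". A minimal sketch of extracting one record from such a response (assuming only the JSON shape shown here, not any particular tool's API):

```python
import json

# Raw LLM response, verbatim from the log above.
raw = '''
[
  {"id": "rdc_n5jiky5", "responsibility": "user",    "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_n5k2wfq", "responsibility": "none",    "reasoning": "consequentialist", "policy": "none",      "emotion": "mixed"},
  {"id": "rdc_n5l9o9c", "responsibility": "none",    "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_n5le8ku", "responsibility": "company", "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "rdc_n5o5b1o", "responsibility": "user",    "reasoning": "deontological",    "policy": "regulate",  "emotion": "approval"}
]
'''

# Index the coded records by comment id for direct lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Look up the record whose values the coding-result table displays.
coded = records["rdc_n5k2wfq"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → none consequentialist none mixed
```

Indexing by id (rather than scanning the list each time) is the natural shape when a page needs to pair each displayed comment with its row in a batch-coded response.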