Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
At the moment, AI is not good enough at being a lawyer when it is not used by a lawyer, because you need to know the law in order to ask the right questions and use the right prompts. Of course, a lawyer no longer needs to write complicated texts from scratch, so they can work much faster, which means you need fewer lawyers. Another big factor is the law itself: of course AI companies could train AIs to ask the right questions, so that a client doesn't need to be good at prompting. But is it worth it when the AI company doesn't want to be liable for the legal advice? Moreover, many countries require you to have a lawyer once you are in front of a higher court, and I highly doubt they are willing to drop these age-old rules. That being said, it will most likely change the industry a lot. Maybe small law firms will die out, because previously big law firms were only interested in the big, lucrative cases. With a good (maybe custom-made) AI, I can see them becoming so efficient that they can handle smaller cases much faster than any human (a top-tier lawyer might need only a couple of minutes to proofread everything once they get a good, short AI summary of the case plus an AI-generated solution).
youtube AI Jobs 2025-09-16T07:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzJjcJU0frv71lI06h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx2JzkRHfL4kTQAV6x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxu0TEeuCneKlC6s014AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwYxDkJUwYDqdgJ9c54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxwcTc8m_nV5alJnj94AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzYJTb__6Tbkw48nIR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzNI9-WROV7_lQY3xt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy65N6_vRxrfo0jFKN4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxK12luMbv6Xe22hpB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "disapproval"},
  {"id": "ytc_UgzIs9vBXmuH8Ow-kah4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
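The coding result above is recovered by matching the comment's id against the records in the raw model output. A minimal sketch of that lookup, assuming the raw response is a valid JSON array of records as shown (the function name `lookup_coding` and the truncated sample payload are illustrative, not part of the actual pipeline):

```python
import json

# Sample of the raw LLM response: a JSON array of coded records, one per comment id.
raw_response = '''[
  {"id": "ytc_Ugxu0TEeuCneKlC6s014AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "indifference"}
]'''

def lookup_coding(raw, comment_id):
    """Parse the model output and return the record for one comment id, or None."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed model output: no coding can be recovered
    return next((r for r in records if r.get("id") == comment_id), None)

record = lookup_coding(raw_response, "ytc_Ugxu0TEeuCneKlC6s014AaABAg")
print(record["emotion"])  # indifference
```

A model that emits invalid JSON simply yields no coding for the comment, which is why inspecting the exact raw output is useful when a coded value looks wrong.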