Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hypothetically, you could build an AI for legal research, though it would feel more like a search engine over case-law books and their full texts than a chat bot. Designing one that won't lie to you is hard: you have to separate it into two parts. Part 1 is a search engine loaded with all the case-law books and updated daily, while part 2 is a chat bot that takes your question, formats it into an indexed set of keywords, and gives you a response, most likely a list of cases for you to go read rather than a written brief or paper. The upside would be a customizable voice that reads the answer (what you should go read) aloud, plus a legal outline for you to fill in the blanks. Most cases would have a big space at the bottom for the user to write up the case in their own words. The other thing the AI could do is check your grammar and word choice. Still, the majority of the words, including the structure of the argument, would have to come from the lawyer, not the AI. The only new parts of this hypothetical system compared to real ones are that it responds in audio, accepts questions in audio and not only text, has access to the full text, and applies a different standard of proper word choice. The rest is already done by a combination of other tools. The sad part is that because it sends the user to read instead of reading for them, it probably won't take off like DoNotPay did. We do have apps that read documents to us now, but the point is it won't read, compile, and hand you the result.
youtube AI Responsibility 2023-06-11T01:5…
Coding Result
Responsibility: developer
Reasoning: deontological
Policy: unclear
Emotion: indifference
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugyj24cyJnhUOpstKsF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzAMhWY6n5wUe6vWJV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzJEjdLIOSGXF0fweJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzZ_dLUPOwF4r2dnbp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugwiy5Nuc0PdXkn9lXh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwhWSMrFMBUMlqWHu54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzfoisGS-EvSn37Rk94AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyCGoBxliFFIqxe8vF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw-oVJbajo5gPKOaCp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwnY2WQNYTFqo8B_rB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]