Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think you stated the one key to properly using AI systems at this time ... to help with legal research. They can search through reems of data and case law and rulings and documents to find items to look at. By that I mean YOU the lawyer look at them to see if relevant or correct. That can be an incredible time saver and they can find things (in the complexities of law) a person is not aware of. But, they do make mistakes and mistaken interpretations of the material so it needs to be reviewed and organized for human use. And, if you use a tool it must be structured or created FOR that purpose (searching legal documents), and not general chat (I heard on the street from the taco vendor that ....). Otherwise you will get crap as we see lol. I sort of wonder if LexisNexis or Westlaw as mentioned in this video are working on dedicated AI's with their tools to do exactly this. That would be perfect .. train an AI to search all the material they already have to find all the proper references and potential pieces and such (with all cited case numbers lol). Lastly -- I want to see what is making ChatGPT claim these are real cases. I am more interested in how the AI can effectively be taught to lie or be so stupid and not have it detected, either on purpose or accidentally. As not being able to detect these falsehoods is a far more serious problem than people might understand at this point. Believe me I know how errors like this can spiral in common software systems out of control. Imagine this within AI systems and self learning....
Source: YouTube — "AI Responsibility" — 2023-07-04T15:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugy3d2l8HlEE3dy6IAZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyuOoIgcKkO-vl0U_t4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwEM3qctoQ1NB2E6RJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwr4KXbFjPketOxaNN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzAlAOKDiaBQK5hh-J4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwafL6P-pK40mZxUcl4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxVcajFKwg9PnMqbMh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxsL6WeUUO8q2Lj9Eh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxdpctoZsrh1a4ZX714AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyFrcuYxhJVzuGKstx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
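A raw response like the one above can be validated mechanically before its values are surfaced as a coding result. The sketch below parses the JSON and filters out records whose dimensions fall outside the allowed values. Note that the value sets are inferred only from the responses visible here, not from a published codebook, and `validate_codings` is a hypothetical helper, not part of any established pipeline:

```python
import json

# Allowed values per coding dimension — inferred from the raw response above;
# the actual codebook may permit additional values.
SCHEMA = {
    "responsibility": {"user", "ai_itself", "company", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"outrage", "approval", "fear", "mixed", "indifference"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose every
    dimension holds a schema-approved value."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

raw = ('[{"id":"ytc_UgyuOoIgcKkO-vl0U_t4AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
print(len(validate_codings(raw)))  # 1
```

Dropping (rather than repairing) out-of-schema records keeps the check conservative: a hallucinated label never silently reaches the displayed table.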