Would you trust an AI for legal work?

in #life · 6 years ago


I would not immediately trust AI for legal work. What makes AI powerful is how well it is programmed. The limit here, as in legal interpretation itself, is that the programmers' biases would find their way into the code. So, you would be trading one kind of human error for another.

If it is a learning AI, it necessarily has to make mistakes in order to build the data set that would allow it higher accuracy. AI, like humans, learns from mistakes; it is simply far more efficient at making them and learning from them than we are. However, this is little consolation for the person whose life is ruined because the AI lacks experience in the particulars of the case. Assuming the AI has exhausted all possible permutations of legal learning, it could become extremely reliable.

One thing AI would not help with is the whims of the people involved. People enter into contracts, lawsuits, witness stands, and legal matters for all sorts of messy personal and business reasons. They settle out of court, drop lawsuits unexpectedly, enter into contracts under duress or while drinking, are intimidated against testifying, make up false testimony, and exhibit all manner of human foibles.

So, the problem is still a human problem, not one of cold calculation. Would the AI question a person's motivation? Would it come up with a creative solution allowing two parties to agree without going to court? Although on the surface legal work appears to be a matter of logic and reason, nuance and personal judgement go into it as well. Prosecutors, for example, have discretion over whether to charge people with a crime. There is room for negotiation and plea deals. Even ironclad contracts are always subject to renegotiation if both parties can agree.

To recap, any AI legal work would be limited and suspect at the outset. Over time, AI may learn as its data set grows. So, early legal AI would need to be supervised very closely and tutored incessantly. Once it has some degree of mastery, it may still need humans to capture those little nuances that often prevent strict legal interpretations.


Having been a trial lawyer for 38 years, I totally agree, @shainemata. While AI is a powerful tool when used correctly, it is not advanced enough to truly determine a legal outcome in advance. Juries are inherently unpredictable. The number of factors that go into any case evaluation does not lend itself to simply averaging past verdicts to get a value. While AI is helpful for gaining some insight, we are not advanced enough to actually let AI call the shots in the legal arena. Give it 20 more years and who knows.

Another thing that occurs to me is how changeable law is. Each legislative session presents new laws and new interpretations. They are often revisited in a later session because of unintended consequences. These would be problematic for an AI.


Absolutely right. Memorizing the legal code is a waste of time. It changes every year.

Such a great choice of topic, @shainemata.

"If it is a learning AI, it necessarily has to make mistakes in order to build the data set that would allow it higher accuracy."

Not necessarily. An AI could instead be provided with a huge data set of recorded interviews and learn from that.

I personally don't think such data exist. Most interviews are most likely not being recorded.

Great read,
Yours,
Piotr
