Artificial Intelligence (AI) And The Law: Helping Lawyers While Avoiding Biased Algorithms – Forbes


Artificial intelligence (AI) has the potential to help every sector of the economy. There is a challenge, though, in sectors that involve fuzzier analysis and the potential to train on data that perpetuates human biases. A couple of years ago, I described the problem of bias in an article about machine learning (ML) applied to criminal recidivism. It's worth revisiting the sector, as how bias is addressed has changed. One approach is to look at areas of the legal profession where bias is a much smaller factor.

Tax law has far more explicit rules than, for instance, many criminal laws do. As much as there have been issues with ML applied to human resources systems (Amazon's canceled HR system, for example), employment law is another area where states and nations have created explicit rules. The key is choosing the right legal area. The focus, according to conversations with people at Blue J Legal, is on areas with strong rules as opposed to standards. The former allow clear feature engineering, while the latter don't have the specificity needed to train an accurate model.

Blue J Legal arose from a University of Toronto course started by the founders, combining legal and computer science skills to try to predict case outcomes. The challenge was, as it has always been in software, to understand the features of the data set in the detail needed to properly analyze the problem. As mentioned, tax law was picked as the first focus: it has a significant set of rules from which features can be designed, and the data can then be appropriately labeled. After their early work on tax, they moved to employment law.

The products are aimed at lawyers who are evaluating their cases. The goal is to provide attorneys with statistical analysis of the strengths and weaknesses of each case.

It is important to note that employment is a category of legal issues. Each issue must be looked at separately, and each has its own set of features. For instance, in today's gig economy, "Is the worker a contractor or an employee?" is a single issue. The Blue J Legal team mentioned that they found between twenty and seventy features for each issue they've addressed.

That makes clear that feature engineering is a larger challenge than training the ML system. That has been said by many people, but too many folks still focus on the inference engine because it's cool. Turning data into information is the more critical part of the ML challenge.
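To make the idea concrete, here is a minimal sketch of what feature engineering means in this context. The feature names are invented for illustration, not Blue J Legal's actual feature set; the point is mapping messy case facts onto a fixed, ordered vector a model can train on.

```python
# Illustrative sketch only: these feature names are hypothetical.
# Feature engineering here means turning raw case facts into a
# fixed, ordered vector that a model can train on.

FEATURES = ["sets_own_hours", "owns_tools", "single_client"]

def featurize(raw_case):
    """Turn a dict of raw case facts into an ordered 0/1 vector."""
    return [int(bool(raw_case.get(name, False))) for name in FEATURES]

raw = {"sets_own_hours": True, "owns_tools": True}
print(featurize(raw))  # [1, 1, 0]
```

Real systems would also handle numeric and categorical features, but the labeling discipline is the same: every case must be expressed in terms of the same agreed feature set.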

Once the system is trained, the next challenge is to get lawyers to provide the right information in order to analyze their current cases. They (or their clerks) must enter information about each case that matches the features to be analyzed.

On a slightly technical note, their model uses decision trees. They did try the random forest model, of interest in other fields, but found that their accuracy dropped.
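For readers unfamiliar with how a decision tree works, the sketch below shows the core operation: finding the single feature and threshold that best separate the outcomes. This is a one-level "stump," not Blue J Legal's model, and the case features and data are hypothetical.

```python
# A minimal decision "stump" (one-level tree) illustrating how a
# decision tree splits on case features. Sketch only; the features
# and data below are hypothetical.

def best_stump(rows, labels):
    """Return (feature index, threshold, accuracy) of the best split,
    predicting 1 when the feature value exceeds the threshold."""
    best = (0, 0.0, 0.0)
    for f in range(len(rows[0])):
        for t in sorted({row[f] for row in rows}):
            preds = [1 if row[f] > t else 0 for row in rows]
            acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            if acc > best[2]:
                best = (f, t, acc)
    return best

# Hypothetical features per case: [hours per week, pays own expenses]
cases = [[40, 0], [38, 0], [10, 1], [12, 1]]
is_employee = [1, 1, 0, 0]
print(best_stump(cases, is_employee))  # (0, 12, 1.0)
```

A full decision tree applies this splitting step recursively; a random forest trains many such trees on random subsets of data and features and averages them, which evidently hurt accuracy on Blue J Legal's data.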

Blue J Legal claims its early version provides 80-90% accuracy.

By removing variables that can drive bias, such as male vs. female, they are able to train a more general system. That's good from a pure law point of view, but unlike the parole system mentioned above, it could cause problems in a lawyer's analysis of a problem. For instance, if a minority client is treated more poorly by the legal system, a lawyer should know about that. The Blue J Legal team says they did look at bias, in both their Canadian and U.S. legal data, but state that the two areas they are addressing don't show bias that would change the results in a significant way.
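The removal step itself is simple in principle. Here is a sketch of the idea (an assumption on my part, not Blue J Legal's actual pipeline): strip protected attributes from each case record before training, so the model cannot condition on them. The attribute names are hypothetical.

```python
# Sketch of the idea (not Blue J Legal's actual pipeline): drop
# protected attributes from each case record before training, so the
# model cannot condition on them. Attribute names are hypothetical.

PROTECTED = {"sex", "race", "age"}

def strip_protected(case):
    """Return a copy of the case record without protected attributes."""
    return {k: v for k, v in case.items() if k not in PROTECTED}

case = {"hours_per_week": 40, "owns_tools": 0, "sex": "F"}
print(strip_protected(case))  # {'hours_per_week': 40, 'owns_tools': 0}
```

Note that dropping a column does not guarantee fairness: other features can act as proxies for the removed ones, which is one reason the team still had to examine their data for bias directly.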

One area of bias they've also ignored is that of judges, for the same reason as above. I'm sure it's also ignored for marketing reasons. As they move to legal areas with fewer rules and more standards, I could see strong value for lawyers in knowing whether the judge to whom a case has been assigned has strong biases based on features of the case or the plaintiff. Still, if they analyzed judges, I could see other bias being introduced, as judges might become biased against lawyers using the system. It's an interesting conundrum that will have to be addressed in the future.

There is a clear ethical challenge in front of lawyers that exists regardless of bias. For instance, if the system comes back and tells the lawyer that 70% of similar cases go against the plaintiff, should the lawyer take the case? Law is a fluid profession, with many cases being similar but not identical. How does the lawyer decide whether the specific client is in the 70% or the 30%? How can a system provide information that helps a lawyer decide to take a case with a lower probability of success, or reject one with a higher probability? The hope is, as with any other profession, that the lawyer would carefully evaluate the results. However, as in all industries, busy people take shortcuts, and far too many people have taken the old acronym GIGO to mean not garbage in, garbage out, but rather garbage in, gospel out.

One way to help is to provide a legal memo. The Blue J Legal system provides a list of lawyer-provided answers and similar cases for each answer. Not being a lawyer, I can't tell how well that has been done, but it is a critical part of the system. Just as too many developers focus on the engine rather than on feature engineering, they also focus on the engine while minimizing the need to explain it. In all areas where machine learning is applied, but especially in the professions, black-box systems can't be trusted. Analysis must be supported so that lawyers can understand and evaluate how a generic decision impacts their specific cases.

Law is an interesting avenue in which to test the integration between AI and people. Automation won't be replacing the lawyer any time soon, but as AI evolves it will increasingly be able to assist people in the industry, helping them become more educated about their options and use their time more efficiently. It's the balance between the two that will be interesting to watch.

