The Ethics of AI in The Legal Profession | Ipro Tech

Posted: July 18, 2021 at 5:43 pm

[author: Doug Austin, Editor of eDiscovery Today]

The fifth and final installment of the virtual Legalweek(year) series wrapped up this week, following earlier iterations in February (the traditional time of year for in-person Legalweek), March, April, and May. I attended a handful of sessions and had planned to give you a sampling of them, but the session I attended at the end of the conference on Wednesday was so good that I decided to cover it specifically.

The session, The Ethics of AI in The Legal Profession, was presented by Tess Blair of Morgan Lewis and Maura R. Grossman of the University of Waterloo and Maura Grossman Law (a name that should be familiar to anyone who knows Technology Assisted Review (TAR), as she and Gordon V. Cormack defined the term and published the groundbreaking study that demonstrated how TAR could be more efficient and effective for document review).

Blair and Grossman covered several aspects of the use of AI, a couple of which I will briefly recap here. They also provided some interesting graphics to illustrate various concepts such as machine learning (including interspersed pictures of chihuahuas and blueberry muffins so similar it's startling), natural language processing (NLP), and deep learning.

Advising Clients Developing or Using AI

Among the topics covered here was the idea that crowdsourcing can introduce bias into AI algorithms; the example used was typing the phrase "lawyers are" into Google search, where the autocomplete suggestions included terms like "scum," "liars," "sharks," "evil," and "crooks." Ouch!

There are three places where AI bias can come into play: 1) the data, as in the example of training an algorithm on only white faces to the point that it won't handle black faces; 2) the algorithm itself, which can be tuned to weight things differently; and 3) the humans involved, who may have algorithm aversion, automation bias (assuming the algorithm is correct precisely because it is not human), or confirmation bias (accepting the results only when they confirm what they already believe).
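To make the data-bias point concrete, here is a toy sketch of my own (not something shown in the session) using Python and scikit-learn: a simple classifier is trained on synthetic data where one group vastly outnumbers another, and its accuracy on the under-represented group suffers. The group names, feature, and numbers are all invented purely for illustration.

```python
# Toy illustration (not from the session): a classifier trained on data that
# under-represents one group performs noticeably worse for that group.
# All groups, features, and numbers here are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, shift):
    # One numeric feature; the "true" label depends on a group-specific cutoff.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 1))
    y = (X[:, 0] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return X, y

# Group A dominates the training set; Group B is scarce and distributed differently.
X_a, y_a = make_group(n=5000, shift=0.0)
X_b, y_b = make_group(n=100, shift=1.5)

model = LogisticRegression().fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])
)

# Evaluate each group separately on fresh samples.
for name, shift in [("Group A", 0.0), ("Group B", 1.5)]:
    X_test, y_test = make_group(n=2000, shift=shift)
    print(f"{name} accuracy: {model.score(X_test, y_test):.1%}")
```

On a typical run, the dominant group's accuracy is high while the under-represented group's is far lower; the model isn't malicious, it simply never saw enough of the second group to learn its pattern. That, in miniature, is the "only white faces" problem.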

Grossman also discussed the use of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, which tended to score black defendants as far more likely to reoffend than non-black defendants. In one case (reported in this article by ProPublica), an 18-year-old black woman was rated high risk for future crime (a score of 8) after she and a friend took a kid's bike and scooter that were sitting outside, while a 41-year-old white man who had already been convicted of armed robbery and attempted armed robbery was rated a low risk.

Grossman pointed out that COMPAS experienced "function creep": it was originally designed to provide insight into the types of treatment an offender might need (e.g., drug or mental health treatment), then expanded to decision-making about conditions of release after arrest (e.g., release with no bail, bail, or detention without bail), before being expanded again to decisions about sentencing.

Blair added discussions regarding privacy and AI moral dilemmas, such as the case of an autonomous vehicle entering a tunnel with a child in the middle of the road: does it go straight ahead and kill the child, or veer into the wall and kill the passenger (yikes!)? Those and other moral dilemmas can be found here at MIT's Moral Machine site.

Resources for Practicing with AI

With regard to practicing law while using AI, the presenters also discussed several sources of guidance regarding an attorney's duty to understand AI.

Conclusion

Blair and Grossman concluded with a brief discussion of whether AI is going to take lawyer jobs (there may be fewer attorneys in the future, but they will be more focused on the types of tasks they were trained for in law school) and whether AI will ever become smarter than humans (Grossman noted that the technology often still can't do what a 3-year-old can do).

So, with great power comes great responsibility.

But lawyers (or anybody using AI) need to do their part to understand the technology and the concepts (as well as the risks) to fully benefit from AI. When they do, they can accomplish amazing things!

Next year, Legalweek returns to an in-person event in New York City from January 31 to February 3! See you there!

