Getting ahead: what lawyers need to know about AI and the law

While the extent of AI's powers is often exaggerated, lawyers who embrace it can benefit their careers in several ways, writes Dr Lance Eliot

Artificial intelligence keeps marching ahead, and almost every day brings a seemingly breathtaking announcement of a new AI-infused app or a warning that AI is set to take over the world.

Let’s unpack this with a focus on AI and the law.

First, be aware that AI and the law is really a two-for-one combination. One facet is devising our laws to govern AI, which you could call the application of the law to AI. The other is the application of AI to the law, embodied in AI-powered legaltech systems used to aid or perform legal tasks. The latter encompasses Contract Lifecycle Management (CLM) software packages with AI features for analysing contracts, as well as e-Discovery tools that run AI-guided searches across a vast corpus of discovery materials.

Lawyers who are interested in the intersection of AI and the law can choose either or both of these aspects.

New laws

In the case of applying the law to AI, you could work with regulators to craft new laws or adjust existing ones to better cover the advent of AI. The trick is that you ought to know something substantive about AI to participate in that capacity. We are already witnessing a rush toward new AI-related laws that are at times overly broad because lawmakers misunderstand what AI is; the same miscomprehension has also produced laws so narrow that most AI apps could claim to fall outside the statutory definition of AI.

Another profession-boosting angle for lawyers entails providing legal services to prospective clients that get themselves into AI-related hot water. There is soon going to be quite a rise in legal cases aimed at companies developing AI, and even at firms that are simply employing AI systems.

You’ve perhaps seen headlines that AI systems are being devised and promulgated with various embedded racial biases, gender biases and other inequities.

Algorithmic decision-making

Imagine a company that provides an online capability for consumers to apply for a mortgage loan. The company opts to make the online system ‘smarter’ by adding advanced AI that it is licensing from a startup vendor. Then it turns out that this algorithmic decision-making (ADM) veers toward approving loans for some based on their race or gender, having been steered in that direction by the AI functionality. You can bet your bottom dollar that there will be legal action brought not just against the startup vendor that devised the AI but also the company using it.
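To make that concrete, here is a minimal, purely illustrative Python sketch of the kind of disparate-impact check an auditor or a litigator's expert might run over a lender's decision log. The column names, group labels and the four-fifths threshold are assumptions for illustration only, loosely borrowed from the 'four-fifths rule' familiar from discrimination analysis; this is not a statement of how any particular ADM system works.

```python
# Illustrative disparate-impact check over a lender's decision log.
# The (group, approved) record format and the 0.8 threshold are
# assumptions for illustration only.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs, e.g. ("group_a", True)."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold` times the
    highest group's rate (a rough analogue of the 'four-fifths rule')."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

if __name__ == "__main__":
    log = ([("group_a", True)] * 80 + [("group_a", False)] * 20
           + [("group_b", True)] * 55 + [("group_b", False)] * 45)
    print(approval_rates(log))          # {'group_a': 0.8, 'group_b': 0.55}
    print(disparate_impact_flags(log))  # {'group_a': False, 'group_b': True}
```

Real fairness audits are, of course, far more involved; the point is only that such disparities are measurable from the decision records an ADM system leaves behind, which is precisely what makes them fertile ground for litigation.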

Clients will find themselves struggling to cope with the legal onslaught that is surely going to emerge as AI becomes more pervasive and the legally borderline downsides of these systems are gradually revealed. The initial enthusiasm about 'AI for good', namely the use of AI to solve societal issues, has incrementally given way to the appearance of 'AI for bad'. There is outright 'AI for bad' produced by evildoers with adverse intent, but there is also under-the-radar foul AI that arises when developers and users are unaware they have shaped, or are using, AI systems containing untoward underpinnings.

I’d like to also clear up something essential about today’s AI, which lawyers need to realise and take to heart: it is not sentient. Despite what you might read or hear, we aren’t even close to having AI that is. And nobody can say when we might reach that point, or if it will happen at all.

Absence of magic

The AI that we have currently is based on traditional computational capabilities. There isn’t any magic involved. I realise there is a great deal of heated legal debate about whether AI ought to be construed as a form of human-like agency, but this is more a futuristic perspective than reflective of the AI that we have right now. For those that want to blame AI itself for nefarious AI acts, trying to ascribe a legal duty to contemporary AI is a stretch. The people and firms that have built the AI programs, or that have chosen to foster those AI apps, are going to be facing legal wrath, rather than some sentient AI construct.

Shifting gears, the other career pursuit for some lawyers would be aiding the integration of AI into legal practice software and systems (that’s the application of AI to the law). Some lawyers are realigning their focus within the legal profession and aiming instead to work for legaltech vendors that are striving to add AI and want the deep legal knowledge of AI-savvy lawyers. There are also in-house attorneys choosing to work with their tech teams to infuse AI capabilities into proprietary apps already used in their law offices. A relatively new role combining legal prowess with sufficient technological acumen, known as the legal engineer, has also been gaining in popularity.

One way or another, the future of the law is going to involve AI, and that is an inarguable fact for which you should be readying yourself.

Dr Lance Eliot is a Stanford University Fellow affiliated with the Stanford Center for Legal Informatics
