I, Robot: the rise of AI, machine learning and ChatGPT in legal

3Kites' Paul Longhurst and Kemp IT Law's Richard Kemp discuss the potential for AI in the legal profession and how the technology is likely to impact law firms

AI may bring benefits, but it is no substitute for experienced lawyering, says Paul Longhurst. Photo: Den Rise / Shutterstock

Paul Longhurst writes…
 
Following on from article 17 in 3Kites' Navigating Legaltech series, I wanted to look at the impact of AI on legal work and the challenges it introduces for law firms. Let me start by saying that, at this point, AI (artificial intelligence) is still no substitute for experienced lawyering when trying to find the best outcome for your client in a complex matter. When a lawyer works through a matter, years of experience are engaged (often without conscious control) to help provide the balanced judgements that feed into the advice given. That advice can be highly nuanced and is rarely binary – in extremely complex situations it may be right in all the key areas, but it is probably not 100% correct all of the time.
 
Where AI tools can help, and quickly, is with reviewing large swathes of information to establish patterns, find connections or themes, identify which content relates to what and so on… but none of this is innate. AI tools may be built for a specific task (such as the cameras in your car tuned to read lighting conditions), may require 'training' (by feeding them documents that exemplify a worktype, such as employment contracts), or both. Like humans, these tools are not 100% correct all of the time, especially when dealing with complex themes – ChatGPT itself warns users that it "sometimes writes plausible-sounding but incorrect or nonsensical answers".
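To make that 'training' point concrete, here is a minimal, hypothetical sketch in Python using the scikit-learn library: a classifier learns a worktype from a handful of labelled example documents. The snippets and labels are purely illustrative, not real training data.

```python
# A minimal sketch of 'training' a tool on documents that exemplify a
# worktype. The snippets and labels below are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

documents = [
    "The Employee shall be entitled to 25 days' annual leave per year.",
    "The Employee's employment shall commence on the Start Date.",
    "The Supplier shall deliver the goods to the Buyer's premises.",
    "The Landlord grants the Tenant a lease of the Premises.",
]
labels = ["employment", "employment", "supply", "lease"]

# Turn each document into word-frequency features, then fit a classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(documents, labels)

# Ask the trained model to classify an unseen clause.
print(model.predict(["The Employee may terminate on one month's notice."]))
```

The tool only knows what its examples teach it: feed it employment contracts and it learns to recognise employment contracts, nothing more – and, like its trainers, it will sometimes get an unfamiliar document wrong.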

It would be a tough sell to offer clients advice that seems plausible but which may be incorrect. It is worth noting here that ChatGPT (and some other AI tools) won't give any indication of the provenance of the content it generates, making it difficult to check for accuracy without going back and doing much of the work yourself. However, where it could be useful is in creating a summary of a situation or concept where lawyers know the correct answer – and can therefore confirm the accuracy of what is produced – but haven't got the time to produce a neat summary.
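As a sketch of that workflow – a Generative AI drafting a summary which a lawyer who already knows the answer then verifies – the hypothetical snippet below assumes the OpenAI Python client with an API key in the environment; the model name and prompt are placeholders.

```python
# Hypothetical sketch: use a Generative AI for a first-draft summary that a
# lawyer will verify. Assumes the OpenAI Python client and an OPENAI_API_KEY
# environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

situation = "..."  # the facts or concept the lawyer wants summarised

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{
        "role": "user",
        "content": f"Summarise the following in plain English for a client:\n{situation}",
    }],
)

# This is a first draft only: the lawyer confirms its accuracy before use.
print(response.choices[0].message.content)
```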
 
To dismiss AI out of hand is therefore to ignore the benefits that advances in technology can bring. The trick is knowing when to use the tools and when to apply our brains. Due diligence is the lengthy process of scanning documents to find commitments made, contracts that roll over and much more besides. AI tools (or, more properly in this context, machine learning tools) are becoming very good at this type of thing – once correctly set up, they can be quick, cost effective and unerringly consistent in applying any rules that have been programmed in, while flagging any exceptions that don't fit those rules. But contrast this with listening to the client, interpreting their requirements, probing to understand something that doesn't quite stack up… that takes years of experience rather than systematic programming.
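To give a flavour of that rule-driven consistency, here is a minimal, hypothetical sketch: clauses matching programmed patterns (here, roll-over language) are caught every time, and anything the rules don't recognise is flagged as an exception for human review. The patterns and clauses are illustrative, not a production rule set.

```python
# Minimal sketch of rule-based due diligence review: apply programmed
# patterns consistently and flag exceptions for a human. Illustrative only.
import re

ROLLOVER_PATTERNS = [
    re.compile(r"automatically\s+renew", re.IGNORECASE),
    re.compile(r"successive\s+(?:periods?|terms?)", re.IGNORECASE),
]

def review(clauses):
    matched, exceptions = [], []
    for clause in clauses:
        if any(p.search(clause) for p in ROLLOVER_PATTERNS):
            matched.append(clause)      # the rules apply, every time
        else:
            exceptions.append(clause)   # doesn't fit the rules: flag it
    return matched, exceptions

matched, exceptions = review([
    "This agreement shall automatically renew for successive one-year terms.",
    "Either party may terminate on 90 days' written notice.",
])
print("Roll-over clauses found:", matched)
print("Flagged for human review:", exceptions)
```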
 
Combining the benefits of both must surely be the way forward, so why aren't we all doing it? Firstly, much of the technology is still in its infancy. Some of the larger firms are already investing time and money to understand what can be achieved with leading-edge tools like Generative AI (where the tool generates new content), alongside identifying the pitfalls to avoid.

However, this is not something that many law firms will have the funds or the ambition to tackle while we are still in the pioneering phase. LEO (Lyons Electronic Office), built in 1951 to run payroll and inventory for J. Lyons and Co of Corner House fame, was the world's first business computer and gave rise to a British computer industry that saw its zenith with ICL before Fujitsu hoovered it up in 1990 – some 12 years after Lyons itself had ceased to be an independent company. Pioneers don't always enjoy the fruits of their labours, so it may be useful to watch from the sidelines while they push ahead without the luxury of learning from others.
 
That said, I think it is a valid strategy for smaller firms to take advantage of proven technology where the path has already been established. Tools providing functionality such as document interrogation, automated time capture, case management decisions based on volume-data analysis and due diligence (as noted above) are steadily gaining ground. If your firm has a vision which these tools can help to deliver, then add them to the project list, accepting that innovation is rarely perfect the first time but that you will undoubtedly learn something and be better placed for phase two.
 
Some of the learning here may be expensive or have other repercussions if firms move ahead without fully understanding the regulatory risks lurking round the corner… and here is where Richard Kemp can pick up the story.


Read the Law Over Borders comparative law guide to AI



Richard Kemp writes…
 
Lots to think about here. Generative AI has grabbed the headlines in the legal world this year in a way that no other lawtech really has. The top question for firms’ management is: ‘How will these AI tools benefit the business and how do we manage the risks?’ 
 
As with other lawtech, the use of AI in legal services is evolutionary, not revolutionary. For firms it is a question of focusing on the practical and on what really matters. The key areas are regulatory, contractual and IP. GDPR is likely to be the most vexing AI question for UK law firms. Here, key risks include:

  • establishing a lawful basis for training the underlying model;
  • data subject expectations and fairness, particularly around privacy notice requirements, developing mechanisms to deal with data subject requests, and 'human in the loop' issues under Article 22 (automated individual decision-making and profiling);
  • inferences and ‘hallucinations’ (the AI getting it wildly wrong) as personal data;
  • data breach and security risks; and
  • the real-world point that AI providers are likely to entertain little or no change to their standard or enterprise terms of service (ToS), which may not have been prepared with GDPR front of mind.

From a regulatory perspective, aside from GDPR, the SRA's Code of Conduct for Firms is relatively benign on firms' use of technology, and this is also the case with AI, certainly at the moment. Where the firm is working with a client in a heavily regulated sector like financial services or healthcare, the firm itself won't be directly regulated in that sector, but it will need to comply with the relevant engagement commitments where the client has flowed down a wide range of IT hygiene-related policies and terms to all its law firms. We are starting to see banks and other large clients seeking to impose specific terms around AI, adding to the set of IT and security terms that have grown very rapidly in recent years.
 
Contractually, the AI providers' ToS that we're seeing at the moment are developing at pace but remain one-sided, typically with broad, uncapped indemnities in favour of the provider. If the firm can move off the provider's standard ToS to an enterprise agreement there may be a little more room for negotiation, but it is likely to remain a question of 'you want our service, you sign our terms'. The firm should back off these risks as far as it can in its client-facing engagement terms.
 
Looking at the firm’s client agreements, can the firm effectively back off to its clients the most onerous terms of the AI provider? Does this need a change to current client terms?  Will any AI-related services the firm is providing need a change or an addendum to its engagement terms – perhaps a specific statement of work or KPIs for a particular project? With banks, insurers and other enterprise clients, is a firm’s use of AI consistent with all aspects of the firm’s commitments in its agreements with those clients? 
 
IP-wise, in the wake of ChatGPT and similar Generative AI, we are starting to see a wave of disputes around claimed misuse of IP in both input and output content – Getty Images suing Stability AI for copyright infringement, and AWS (Amazon Web Services) prohibiting its people from sending AWS code to Generative AIs, are cases in point. Firms should consider protecting their own AI-related IP through express terms in their engagement arrangements.

Paul Longhurst is a director of 3Kites and Richard Kemp is a partner at Kemp IT Law. This is the 18th article in the series Navigating Legaltech.

--------------------

About 3Kites and Kemp IT Law
3Kites is an independent consultancy, which is to say that we have no ties or arrangements with any suppliers so that we can provide our clients with unfettered advice. We have been operating since 2006 and our consultants include former law firm partners (one a managing partner), a GC, two law firm IT Directors and an owner of a practice management company. This blend of skills and experience puts us in a unique position when providing advice on IT strategy, fractional IT management, knowledge management, product selections, process review (including the legal process) and more besides. 3Kites often works closely with Kemp IT Law (KITL), a boutique law firm offering its clients advice on IT services and related areas such as GDPR. Where relevant (eg when discussing cloud computing in a future article) this column may include content from the team at KITL to provide readers with a broader perspective including any regulatory considerations.

