Is AI really here yet? Have we missed the boat? Are we struggling to keep up?

Law firms continue to wrestle with how to adopt AI and the potential impact on their businesses, write Paul Longhurst, Jenni Tellyn and Richard Kemp

Risks around AI are making firms cautious about fully embracing the technology (Shutterstock)

Paul Longhurst writes…

Against the backdrop of Rishi Sunak’s AI Safety Summit and the seemingly daily pronouncements introducing AI-enabled everything, let’s start by contrasting the current situation with our reaction to other significant technological shifts in the past 200 years.

Naysayers warned early train passengers of potential risks to the human body if propelled at 30, 40 or 50 miles per hour. When, later in the 19th century, cars liberated travellers from railway tracks, the British government required that they be preceded by someone waving a red flag. Jump forward a hundred years or so and Airbus introduced planes with three independently programmed computer systems that decided by majority: if one system disagreed, the other two overruled it. These systems used computer-generated code, raising concerns that people might not understand how they worked – although the Airbus safety record has proved this approach, which has evolved over time, to be spectacularly successful.

This is not to ignore the many accidents that pioneering travel has brought with it. However, it would be hard to imagine life without trains, planes or automobiles today or to ignore the impact that removing them would have on economies across the globe. My point here is that humans, often with good reason, are cautious of technological leaps and yet we have also learned to embrace these advances over time to the benefit of humankind. I believe that we will look back on AI in the same way but that, as of today in 2023, we should be concerned about how these tools are used until we are sufficiently aware of their impact and how best to control them.

To date, we have seen law firms working on innovative solutions while, at the same time, banning their lawyers from using ChatGPT and other generative AI tools for client-facing work, and being wary of asking clients for consent to do so for fear of opening a can of worms. Clients, on the other hand, have sometimes been more than happy to use the likes of ChatGPT before approaching their law firms to check that the generated opinion or document is legally sound. Whether this approach in fact saves cost or time is highly debatable, but client demand may be pushing firms to adapt even where they aren't yet fully embracing AI internally. How clients may require firms to flex their legal services pricing to reflect the use of time-saving AI tools will be interesting to monitor as everyone gets more savvy about what the tools can be deployed on and how much of a game changer they really are in practice. So what is the answer?

Jenni Tellyn writes…

Let’s look at what we are actually seeing law firms doing and what we might reasonably anticipate them implementing soon.

Law firms are experimenting. Cautiously. Many have gone out to their businesses (after their managing partners got excited about whether AI will be transformative) to ask for ideas on use cases to explore. I'd imagine that over half the suggestions gleaned through these discovery exercises are for tools or functionality that already exist in the firm's tech stack but haven't been fully adopted, or for efficiencies better driven by tweaks to human-led processes than by an AI tool. Some of the use cases we will look back on as transformative in years to come may not have been thought of yet as firms continue to experiment. And the experimentation extends beyond use cases: firms are developing their own AI tools, either in-house or in partnership with vendors, trained on their own content so that users can explore both capabilities and risks.

Given the innate conservatism of most lawyers, the risks of using AI tools are at the forefront of firms' discussions, combined with a certain amount of cynicism. Risks include fears that an AI tool outside the firm will use sensitive data, or that the tool will get something wrong which will be relied upon... embarrassingly. Scepticism abounds: having to fact-check and source each statement that an AI tool (optimised for plausibility, not accuracy) comes up with won't in fact save a junior lawyer much time on the task they have set it. Added to this scepticism is the concern that we might deskill our lawyers and business services teams, or strip them of their creative powers, by overusing AI assistants for their day-to-day work (like teenagers using ChatGPT to do their homework for them and compromising their critical thought processes).

The risks around the content that AI tools crawl in order to generate their responses are real. Copyright infringement cases are already emerging where websites have been scraped of copyrighted content without the authors' permission. And there are real concerns that businesses seeking to build their own models, confined to scraping only their own internal content, will not have enough volume of content in any practice area for the tool to learn enough to be the powerful aid they would like it to be.

The golden rules firms seem to be adopting so far are, first, to think carefully about what goes into and comes out of the tools (what or whose data is being ingested, and whether the outputs are carefully vetted before they are used in anger); and, secondly, to treat the AI tool like an enthusiastic junior who needs close supervision. The use cases proving most promising in trials are those where a more experienced lawyer already knows the answer they expect the tool to deliver (whether a summary of a long document or a precis of a meeting they actually attended) and can then use the tool to verify their thought process or to save a little time in pulling the draft together – though whether the tool can detect sarcasm or irony might be a limitation for meeting summaries. Firms are very cautious about using the tools for legal research, given the scope for catastrophe and time-wasting if they get things wrong. That might be an area where firms leave the e-resources vendors to develop their own AI-enabled bolt-ons to their products and bear the potentially eye-watering subscription cost increases for these tools.

The extractive use cases are proving more fruitful than the creative or generative ones. So, for example, using the tools to quickly pull title information out of large numbers of leases into a report format in a real estate context, or using AI tools to generate ideas for thought leadership opportunities rather than to draft the articles, feels like safer territory for law firms. Large language models work by predicting what the next word in the output should be, based on the training dataset the model has ingested. The potential for superficially plausible gibberish to be created by this mechanism is currently too great. Pure 'creativity' from AI tools makes lawyers nervous. And most clients don't want a 'creative' Facility Agreement; they want a short and accurate one.
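To make that next-word mechanism concrete, here is a minimal sketch in Python – a toy bigram model, not how any production LLM is actually built, with a corpus invented purely for illustration – showing how a "most likely next word" loop produces fluent-sounding text that drifts into exactly the superficially plausible gibberish described above:

    # Toy illustration (not any vendor's implementation) of next-word
    # prediction: count which word follows which in a tiny corpus, then
    # greedily emit the most frequent follower at each step.
    from collections import Counter, defaultdict

    # Hypothetical training text, invented for this sketch.
    corpus = ("the tenant shall pay the rent and "
              "the tenant shall pay the service charge").split()

    # counts[w] tallies the words observed immediately after w.
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def generate(start, length=8):
        """Repeatedly append the single most likely next word."""
        out = [start]
        for _ in range(length):
            followers = counts.get(out[-1])
            if not followers:
                break  # no observed continuation; stop
            out.append(followers.most_common(1)[0][0])
        return " ".join(out)

    print(generate("the"))
    # -> "the tenant shall pay the tenant shall pay the"
    # Each step is locally plausible, yet the whole is gibberish:
    # the model predicts likely words, it does not check legal accuracy.

Real models do the same thing at vastly greater scale, with far longer context than a single word, which makes the output much more convincing – but the underlying mechanism, and therefore the risk, is the same.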

At base, we see firms looking at enhancing their services rather than replacing them so far, but it is early days in the evolution of the revolution. 

Richard Kemp writes…

Let’s now focus on how to manage the firm’s risk with clients.

For law businesses not at the commodity services end, the risks with generative AI are evolutionary, not revolutionary, to borrow Jenni's phrase. At the moment it is cyber security, cloud and data protection that are keeping your insurer up at night, not ChatGPT... yet. The US has always been the crucible for litigation of new tech, and generative AI is no different – witness the current wave of copyright and IP infringement disputes around large language models built on wholesale scraping of the internet.

The narrative here at the moment is a familiar one:

  • make sure the firm’s engagement terms establish clear lines with the client about using AI in the first place;
  • articulate clearly and transparently what is happening and who is doing what with client data – for anything other than ‘normal’ legal work, firms are increasingly using ‘statements of work’, like any other professional services provider, to set out the detail;
  • make sure the firm follows basic AI hygiene, such as avoiding bias and discrimination and ensuring reproducibility of results – whether AI is being used in beta or production, clients are likely to insist on this.  

So come along to our seminar on 21 November (sign up here), where we will explore some of these themes in more detail and look at the risks and potential rewards for the profession at this hugely exciting time of development.

Paul Longhurst is a director and Jenni Tellyn is a knowledge management consultant at 3Kites. Richard Kemp is a partner at Kemp IT Law. This is the 27th article in the series Navigating Legaltech

--------------------

About 3Kites and Kemp IT Law  
3Kites is an independent consultancy, which is to say that we have no ties or arrangements with any suppliers so that we can provide our clients with unfettered advice. We have been operating since 2006 and our consultants include former law firm partners (one a managing partner), a GC, two law firm IT directors and an owner of a practice management company. This blend of skills and experience puts us in a unique position when providing advice on IT strategy, fractional IT management, knowledge management, product selections, process review (including the legal process) and more besides. 3Kites often works closely with Kemp IT Law (KITL), a boutique law firm offering its clients advice on IT services and related areas such as GDPR. Where relevant (eg when discussing cloud computing in a future article) this column may include content from the team at KITL to provide readers with a broader perspective including any regulatory considerations.

