Mismatch between GCs and law firms over generative AI adoption, report finds

In-house counsel expect law firms to adopt cutting-edge tech more than firms realise

There is a marked mismatch between the expectations of law firms and in-house counsel when it comes to the adoption of generative AI, new research by LexisNexis has found. 

The report – Generative AI and the future of the legal profession – found that seven in 10 (70%) in-house counsel expect their law firms to use cutting-edge technology, including generative AI tools, though only 55% of respondents from law firms and the Bar believed that their clients expected them to use such tools. 

Around half (49%) of in-house counsel surveyed expected their law firms to be using generative AI in the next 12 months, with 11% expecting firms to be using the technology already. Only 8% did not want AI used on their work. In contrast, 24% of firms believed their clients would not want them to use AI.

The report comes amid increased focus on generative AI in the legal profession following the launch of OpenAI’s chatbot ChatGPT last year. The platform reached 100 million users in February 2023 just three months after launching, a milestone it took TikTok around nine months and Instagram two years to achieve. 

Most (87%) of the 1,175 UK legal professionals surveyed were aware of generative AI tools – and of that group, nearly all (95%) agreed that such tools would impact legal practice. Almost four in 10 (38%) thought they would have a significant impact, 11% said they would be transformative and 46% thought they would have “some impact”. 

Isabel Parker, partner in Deloitte Legal’s Transform to Operate service, said in the report that generative AI could disrupt the entire foundation of the legal market.

“This could lead to some very positive outcomes: the democratisation of legal advice, universal access to justice, market practice replacing two party negotiations, AI-based case resolution and productivity transformation for lawyers,” she said. 

While only 36% of respondents had used generative AI in a personal or professional capacity, the report found adoption rates are likely to accelerate in the coming months, with 39% saying they are currently exploring opportunities. This figure rose to 64% among respondents from large law firms and to 47% among in-house lawyers.

Almost two-thirds of respondents (65%) agreed that generative AI will increase their efficiency. Asked how they would like to use the technology in their work, respondents said researching matters (66%), drafting briefing documents (59%) and document analysis (47%) had the most potential.

Ben Allgrove, partner and chief innovation officer at Baker McKenzie, said generative AI is different from some of the over-hyped tech developments seen in the past.

“It will change how we practise law,” he said. “One immediate area of focus is on how we might use it to improve the productivity of our people, both our lawyers and our business professionals. While there are, of course, quality and risk issues that need to be solved, we see opportunities across our business to do that.”

Many in the profession expressed concerns about the risks of using AI. Two-thirds (67%) had mixed feelings about the impact of generative AI on the practice of law, agreeing that they saw both positives and drawbacks. This was particularly true of respondents from large law firms, 76% of whom held these mixed views.

One such issue arises when generative AI tools like ChatGPT produce output that is nonsensical or outright false – known as hallucination – which Parker noted is “a serious issue for a profession that prides itself on the accuracy of its outputs”. 

But she added there are ways to mitigate the risk, “such as through quality control by including a human in the loop.”

Kay Firth-Butterfield, executive director of the Centre for Trustworthy Technology, urged caution, saying these systems are only as good as the data on which they are trained. 

“All the concerns we have had in the past about whether we can design, develop and use AI responsibly are extended by generative systems where we simply cannot interrogate how they have reached a particular answer,” she said. “Generative AI tools can give biased and other non-ethical advice and should be used, especially at this early stage, very carefully indeed.”

Email your news and story ideas to: [email protected]