One in five large law firms issue warnings over use of generative AI or ChatGPT, survey finds

Half of legal professionals agree generative AI should be used for legal work, despite concerns over accuracy, privacy, confidentiality and security

Awareness of generative AI tools among legal professionals is high but only around half believe it should be used for legal work amid serious concerns around accuracy and privacy, according to a new report. 

Some 15% of respondents to a survey conducted by Thomson Reuters said their firms had issued a warning about the use of generative AI or ChatGPT at work, rising to 21% of those at large law firms, while 6% said their firms had banned unauthorised use outright.

While most respondents (82%) to the poll of more than 400 legal professionals at law firms in the US, UK and Canada agreed that such tools could be applied to legal work, that share fell to just over half when they were asked whether the tools should be applied. The remaining respondents were roughly split between those who did not believe generative AI should be applied to legal work (24%) and those who did not know either way (25%).

Jason Adaska, director of the Innovation Lab at US law firm Holland & Hart, said in the report he is finding a similar shift in attitudes among those at his firm. 

“The biggest delta I’ve seen is just people understanding that the realm of the possible has sort of shifted in a monumental way,” he said, adding that this means generative AI education has become critical. 

Partners and managing partners at law firms were generally more positive than other attorneys about applying generative AI or ChatGPT to legal work, with 59% in this category agreeing that it should be applied, compared with 52% of associates and 44% of other lawyers within firms.

Arsen Shirokov, director of information governance and security at Canadian law firm McMillan, noted in the report that many attorneys – and especially partners – are interested in ChatGPT and generative AI not only for the tools’ technological capabilities but also for their potential to commodify low-value work.

“[Lawyers] are not typically excited about changing their ways or disrupting the industry that way, but I think lawyers ultimately do see this as an opportunity to actually positively change their business, especially partnership,” he said. “Partners understand the business model a little bit more.”

The outlook was more positive for law firms’ non-legal work, with 72% of respondents agreeing that generative AI or ChatGPT should be applied to such work and just 7% saying it should not.

Current use of generative AI or ChatGPT in law firm operations appears to be rare: just 3% of respondents said it is being used at their firm, while about one-third said their firm is considering its use. Meanwhile, six out of 10 respondents said their firm had no current plans to use generative AI in its operations.

Knowledge management and business operations topped the list among those who said they were using or planning to use ChatGPT or generative AI at their firms. More than half of those respondents cited knowledge management and back-office functions as possible use cases, while brief and memo drafting, contract drafting, and question-and-answer services were each cited by more than one-third.

The report also noted that generative AI could present a uniquely powerful proposition for mid-size firms. 

Jessica Lipson, co-chair of the technology, data and IP department at New York law firm Morrison Cohen, said in the report that generative AI’s potential for dramatically reducing the time spent on repetitive tasks means mid-size firms may eye it differently than other artificial intelligence-powered technologies.

“It’s not the lack of knowledge or skill, it’s not the quality of our lawyers that’s holding us back, it’s just the volume of them,” Lipson said. “Once you take away that staffing limitation, which I think could happen – and it’s not going to happen tomorrow, but in a number of years, once we have fully implemented technologies like this – I think it could really help us go head-to-head with other firms. The size of the firm will be less important than who is in that firm.” 

Perhaps the main reason for the gap between generative AI’s potential utility in law firms and its actual adoption is the concern legal professionals expressed about its risks, which fell into four main categories: accuracy, privacy, confidentiality and security.

A full 62% of respondents said their law firm had concerns around generative AI’s use at work, while an additional 36% said they did not know how their firm views its risk. Just 2% said their firms had no concerns.  

Accuracy was a common concern among respondents – one of whom noted that AI might not spot an error in legal work that a human lawyer would, which could raise ethical concerns around acting in the best interest of the client. 

“The more lawyers rely on third-party AI for research, writing, etc., the less of the lawyer’s true skill set is involved,” another respondent said. “Lawyers may become essentially ‘book reviewers’ rather than authors. Yet they and their firms are personally and corporately liable for errors and omissions. Raises insurance, malpractice and other issues.”

Another common concern among respondents centred on the data needed for the system to function, particularly if that included private client data. One respondent cited the confidentiality of the source material used to generate AI output, while another took issue with how the data would ultimately be used, citing the need for adequate guardrails so that the AI does not learn incorrect or inappropriate behaviours.

The report highlighted that all those interviewed mentioned the need to apply guardrails to generative AI and said they did not fully trust generative AI tools with confidential data.

The report concluded that the legal industry, like many others, will be greatly impacted by the evolution of generative AI and public-use models like ChatGPT. 

“As our research shows, even as actual use in the legal industry may be rare, attitudes are changing, and potential use cases are being explored,” it said, adding that a day will come when generative AI and ChatGPT are as common in law as online legal research and electronic contract signing are now.

The report – ChatGPT and Generative AI within Law Firms – was based on the responses of 443 legal professionals from large and mid-sized firms, most of which (63%) were based in the US, with the rest split between the UK and Canada.

The respondents’ job titles were roughly split among partners/managing partners (34%), associates (30%) and other lawyers (26%). The remaining 11% comprised paralegals, law librarians, C-suite/executive leadership and IT/technology management.
