Arbitration survey highlights concern over use of AI for legal texts and adjudications

Report highlights growing awareness of AI’s benefits, but flags need for regulation and disclosure

Most members of the international arbitration community are opposed to the use of artificial intelligence (AI) for adjudications and in legal texts while acknowledging its usefulness for certain tasks including document review and production, according to new research.

These are the headline findings of Bryan Cave Leighton Paisner’s (BCLP’s) 2023 Annual Arbitration Survey, which focuses on the growing use of AI in international arbitration and is based on a survey of 221 respondents.

Notably, a vast majority of the respondents (86-88%) expressed concerns over critical issues like AI’s impact on cybersecurity, AI-generated fallacies, the potential for breaches of confidentiality, and the danger of AI tools being misused to create deepfakes or tamper with evidence.

Despite these concerns, 37% of respondents had used the technology to translate documents, 30% had deployed it for document review and production and for text formatting and editing, and nearly one in four (24%) had used it for document analysis, defined as “extracting and organising data from documents”.

Furthermore, a large majority of respondents expressed no objection to using AI for such purposes, although 62% “draw the line” at its use to generate text for arbitral awards, and 53% took the same view of its use for legal argument and legal submissions.

Confidence in arbitrators’ technical capability to manage AI tools was correspondingly low, with 79% of the respondents rating it at five or below out of 10.

Claire Morel de Westgaver, an arbitrator and partner at BCLP, said: “There is a growing expectation that arbitrators should be able to identify and navigate any risks associated with using AI tools in international arbitration. Given the speed with which AI technology develops, arbitrators will undoubtedly require more advanced training and assistance concerning AI technology and its implications for arbitration.”

Views on the need for the use of AI to be disclosed depended on the task: 72% said it should be disclosed when used for drafting expert reports, 65% felt it was necessary for document review and production and 62% believed disclosure was needed when AI was used for translating documents. Despite calls for transparency, however, only 50% of respondents believed that the use of AI tools in arbitration should be openly disclosed to all involved parties.

Oliver McClintock, chief commercial officer of Opus 2, said: “The report illustrates both the increasing demand for AI and the challenges technology providers must be able to navigate to provide a beneficial application of emerging AI thoughtfully. Ultimately, the successful use of AI will depend on solutions designed specifically for the nuance of arbitration and humans using it who understand its power and limitations.”

A majority (63%) of respondents advocated regulatory measures on the use of AI in international arbitration. However, there was less enthusiasm for embedding such regulation within arbitration rules, with only 26% in favour of this approach, suggesting that arbitral institutions are seen as less well placed than other bodies to regulate AI.

The use of AI was a key theme at this month’s IBA Annual Conference in Paris, where president Almudena Arpón de Mendívil Aldama said it had been consistently identified as “the most important issue, in terms of substantive law developments, the challenges posed to the legal profession and society as a whole”.

In March, the Council of Bars and Law Societies of Europe issued guidance on the use of AI-based tools by EU lawyers, highlighting the risks to professional obligations. In August, the American Bar Association announced the creation of a task force to examine the impact of AI on practice and the ethical implications for lawyers. 

Some 54% of the survey respondents were lawyers in law firms; 33% were arbitrators; and 12% were in-house counsel.
