‘Maintaining control and integrity’: Bar Council issues guidance for use of generative AI

Barristers warned that tech such as ChatGPT must be used responsibly to avoid ‘harsh and embarrassing consequences’

There is nothing inherently improper about using reliable AI tools for augmenting legal services, but they must be properly understood by the individual practitioner and used responsibly. 

So says new guidance published by the Bar Council on the use of ChatGPT or other generative AI software based on large language models (LLMs) by legal practitioners.  

The guidance – Considerations when using ChatGPT and generative artificial intelligence software based on large language models – emphasises that the outputs of such models cannot be taken “on trust and certainly not at face value”. 

It notes that LLM software does not analyse the content of data and does not think for itself. Rather, it is a “very sophisticated version of the sort of predictive text systems that people are familiar with from email and chat apps on smart phones, in which the algorithm predicts what the next word is likely to be”. 
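The predictive-text analogy can be illustrated with a deliberately simplified sketch. This toy bigram model merely counts which word most often follows another in its training text; real LLMs use neural networks over tokens rather than frequency tables, but the core idea of predicting the next word from prior words is the same:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the corpus."""
    following = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(following, word):
    """Return the word most frequently seen after `word`, or None if unseen."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Tiny illustrative corpus (hypothetical example text)
model = train_bigrams("the court ruled the case the court adjourned")
print(predict_next(model, "the"))  # -> "court" (it follows "the" most often)
```

The model outputs whatever is statistically likely given its training data, with no notion of whether the continuation is true, which is precisely why the guidance warns that plausible-sounding output is not the same as a correct answer.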

Bar Council chair, Sam Townend KC, said: “The growth of AI tools in the legal sector is inevitable and, as the guidance explains, the best-placed barristers will be those who make the efforts to understand these systems so that they can be used with control and integrity. Any use of AI must be done carefully to safeguard client confidentiality and maintain trust and confidence, privacy and compliance with applicable laws.”

The guidance sets out the key risks with LLMs:  

1) Anthropomorphism – the technology is designed and marketed in such a way as to give the impression that the user is interacting with something that has human characteristics, when in fact it does not in any meaningful sense.  

2) Hallucinations – some outputs generated by these LLMs may sound plausible but are either factually incorrect or unrelated to the given context.  

3) Information disorder – ChatGPT can inadvertently generate misinformation, and not only because of the volume of misinformation that is likely to be in the training data.  

The guidance highlights the example of a New York lawyer who included six fictitious cases suggested by ChatGPT in his submissions, having asked the technology whether the cases were real and been told that they were and could be “found in reputable legal databases such as LexisNexis and Westlaw”. 

The guidance noted the lawyer’s mistake in thinking that the LLM was engaging in the human process of reading and understanding the question, searching for the correct answer and then communicating the correct answer. In fact, “all the LLM was doing was producing outputs (which just happened to be in the form of words) which its mathematical processes related to inputs (which also just happened to be in the form of words)”. 

4) Bias in data training – The fact that LLMs are trained on data that is trawled from the internet means that the technology will inevitably contain biases or perpetuate stereotypes or world views that are found in the training data. Although the developers of ChatGPT have attempted to put safeguards in place to address these issues, it is not yet clear how effective these safeguards are.  

5) Mistakes and confidential data training – ChatGPT and other LLMs use the inputs from users’ prompts to continue to develop and refine the system. As such, anything that a user types into the system is used to train the software and might find itself repeated verbatim in future results. This, the guidance notes, is “plainly problematic not just if the material typed into the system is incorrect, but also if it is confidential or subject to legal professional privilege”.  

The guidance also explores the considerations for practitioners when using LLM systems, including the need to verify outputs and maintain proper procedures for checking them.  

LLMs should not be a substitute for the exercise of professional judgement, quality legal analysis and the expertise that clients, courts and society expect from barristers, it says. Barristers “should be extremely vigilant not to share with an LLM system any legally privileged or confidential information, or any personal data”, as the input information provided is likely to be used to generate future outputs and could therefore be publicly shared with other users.  

Practitioners should also critically assess whether content generated by LLMs might violate intellectual property rights and be careful not to use words which may breach trademarks, with the guidance noting that several IP claims against generative AI owners have already been lodged for allegedly unlawful copying and processing of copyright-protected works.  

Improper use of LLMs can lead to “harsh and embarrassing consequences”, it said, including claims for professional negligence, breach of contract, breach of confidence, defamation, data protection infringements, infringement of IP rights and damage to reputation; as well as breaches of professional rules and duties.  

It warned that it is important to keep abreast of relevant Civil Procedure Rules, which in the future might implement rules and practice directions on the use of LLMs, for example, requiring parties to disclose when they have used generative AI in the preparation of materials, as has been adopted by the Court of King’s Bench of Manitoba.  

Townend said the guidance would be kept under review and warned practitioners “to be vigilant and adapt as the legal and regulatory landscape changes”. 
