Independent law firms can stay competitive in race to adopt AI

Ethical AI policies allow law firms to harness AI's benefits while mitigating risks
[Image: The European Parliament in Strasbourg, France. The EU has set out to lead the way in AI regulation; law firms can stay ahead by introducing ethical AI policies. Credit: MDart10 / Shutterstock]

Large international law firms have spearheaded the development of AI-powered tools in the legal sector, increasing pressure on firms of all sizes to invest in the technology.

Nearly half (49%) of in-house counsel surveyed by LexisNexis expect law firms to adopt generative AI within the next 12 months.

Just this week, Linklaters launched an AI sandbox to "quickly build out AI solutions, many stemming from ideas suggested by its people".

The legal tech market, meanwhile, has responded with a surge in the number of AI products.

Firms that ignore this growing client interest in AI risk being left behind. And yet many lawyers remain wary of the technology, complaining that it lacks transparency and can hallucinate, producing inaccurate information.

The stakes have been raised by the European Union's AI Act, which came into force in August, mandating high standards of transparency, accountability and security as well as effective education and training programmes.

However, regulation alone cannot account for the specific needs and risks of law firms, which can stay ahead of the game by implementing their own ethical AI policies.

And it need not be an overly complex or daunting process. Firms or legal departments could start by setting up a working group to identify the key principles, which are likely to emphasise mindfulness and responsibility, ensuring firms use only platforms that are accountable and transparent and that take steps to prevent bias.

A core feature of any such policy is likely to focus on the inputting of data. At the companies developing AI systems, human oversight starts with data input: appointed experts carefully curate the data used to train the systems and then continually evaluate it, looking for outliers or anomalies and correcting sources to maintain its quality.

It is just as important for outputs to be evaluated by the lawyers who use those systems. Lawyers can be trained to pinpoint potentially flawed outputs and to fact-check essential and high-risk information. They should then know how to feed these insights back into the systems themselves, improving future outputs and minimising risk, thereby creating better models for everyone.

Effective internal policies, which must be regularly revisited and updated, can provide a mechanism for self-regulation, empowering firms to balance innovation with responsibility.

By thinking carefully about how they want to harness AI and ensuring their systems are closely supervised by their lawyers, independent law firms can benefit from the technology’s ability to improve the delivery of legal services, delivering value to clients.

