‘Early adopters of Gen AI research tools have reported measurable productivity gains’

Mathieu Balzarini, LexisNexis’s vice president of product in the CEMEA region, says interest in Gen AI legal research tools ‘is both high and growing rapidly’

In this Q&A, Mathieu Balzarini, vice president of product for Continental Europe, Middle East and Africa (CEMEA) at LexisNexis, demystifies the technology behind generative AI (Gen AI) legal research products and predicts rapid advancements in its capabilities over the next 12 months.

How would you characterise the level of interest among CEMEA-based lawyers in Gen AI legal research products?

Interest in Gen AI legal research products among CEMEA-based law firms and legal departments is both high and growing rapidly. This enthusiasm is driven by increasing recognition of the efficiency gains, cost reductions and enhanced accuracy these technologies can bring to legal practices. Law firms are keen to leverage Gen AI tools like Lexis+ AI for tasks ranging from research and drafting to case summarisation and document review. 

The region’s legal professionals are acutely aware of the competitive edge that such tools offer, especially in a dynamic and often resource-constrained environment. Early adopters in the CEMEA region have reported measurable productivity gains, with reduced research times allowing professionals to focus on higher-value tasks such as strategic counsel and complex legal analysis. Furthermore, the strong regulatory focus in Europe amplifies interest in AI tools that are built with security, ethical considerations and compliance in mind – qualities central to Lexis+ AI.

Large Language Models (LLMs) lie at the heart of Gen AI products. Why are they so revolutionary?

LLMs represent a paradigm shift in AI technology due to their unprecedented ability to process and generate human-like text across a broad range of topics. In the legal domain, LLMs have transformed tasks such as drafting documents, summarising complex cases, and conducting nuanced legal research. What makes LLMs particularly revolutionary is their ability to understand context, interpret language with precision and learn from feedback. 

In legal practice, where every word matters and accuracy is paramount, LLMs can analyse complex legal jargon, identify relevant precedents and even suggest solutions that align with a user’s query. They bring a level of efficiency and depth to legal tasks that was previously unattainable, effectively acting as an advanced assistant capable of handling the repetitive, time-intensive aspects of legal work.

While there is considerable excitement about this technology, there are also concerns about so-called hallucinations, instances in which the AI presents inaccurate information as fact. Are these concerns justified?

The concerns about hallucinations in Gen AI are indeed justified, particularly in a field like law where precision and factual accuracy are critical. Hallucinations occur when AI generates content that is plausible but incorrect, often due to gaps or biases in its training data. In legal contexts, such errors can have serious implications, ranging from misinterpretation of laws to reliance on non-existent precedents. 

LexisNexis recognises this challenge and has implemented measures to minimise such risks. The use of Retrieval-Augmented Generation (RAG) in Lexis+ AI is a key innovation in this regard. By limiting the AI’s input sources to a proprietary and authoritative repository of LexisNexis legal content, the system significantly reduces the potential for hallucinations. This approach not only enhances accuracy but also ensures transparency, as users can trace AI-generated responses back to their original sources. While no AI system can guarantee 100% error-free outputs, the integration of human oversight, rigorous data curation and advanced retrieval techniques collectively address this concern, providing a reliable solution for legal professionals.

LexisNexis adopts a ‘closed universe’ approach to its Gen AI technology. How does this work?

The ‘closed universe’ approach employed by LexisNexis involves restricting the scope of the AI’s data to a controlled, curated set of sources. This approach is underpinned by RAG technology, which allows the AI to fetch information from a specific knowledge base – in this case, LexisNexis’s proprietary repository of legal documents, case law, statutes and authoritative secondary sources. 

Here’s how it works in practice: when a user inputs a query, the AI doesn’t rely on the vast and unverified data available on the open web. Instead, it searches the curated legal database, retrieves the most relevant documents and generates a response based on this reliable input. This methodology ensures that responses are accurate, sourced from trusted materials and free from the ‘black box’ effect often associated with AI. The closed universe approach is particularly suited to the legal profession, where the stakes for accuracy are high, and reliability is non-negotiable. This design aligns perfectly with the needs of legal practitioners who demand precision and verifiability in their research tools.
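The retrieval-then-generation flow described above can be illustrated with a short sketch. This is a simplified, hypothetical illustration of the general RAG pattern over a closed corpus, not the Lexis+ AI implementation: the document names, scoring method and `generate` stub are all invented for the example, and a production system would use an LLM and semantic retrieval rather than keyword overlap.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG) over a
# "closed universe" of documents. All names and the toy corpus are
# hypothetical; this is not how Lexis+ AI is actually built.

from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    title: str
    text: str

# The closed universe: a curated, authoritative corpus. The system
# may only draw on these documents, never the open web.
CORPUS = [
    Document("case-001", "Smith v Jones", "Duty of care in negligence claims."),
    Document("stat-042", "Data Act s.12", "Obligations placed on data controllers."),
]

def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = [(len(q_terms & set(d.text.lower().split())), d) for d in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:k] if score > 0]

def generate(query: str, sources: list[Document]) -> str:
    """Stand-in for the LLM call: answer only from retrieved sources,
    citing each so the user can trace the response back to them."""
    if not sources:
        return "No supporting authority found in the curated corpus."
    cited = "; ".join(f"{d.title} [{d.doc_id}]" for d in sources)
    return f"Answer grounded in: {cited}"

answer = generate("duty of care negligence", retrieve("duty of care negligence", CORPUS))
print(answer)
```

The key property is visible in the final step: because `generate` receives only documents drawn from the curated corpus, every response can cite its sources, and a query with no supporting authority yields an explicit "not found" rather than a fabricated answer.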

How do humans fit into the equation?

Humans remain central to the development, oversight, and application of AI in the legal field. At LexisNexis, human expertise plays a critical role throughout the lifecycle of AI systems. From the initial curation of training data to the ongoing evaluation and refinement of AI models, legal professionals ensure the system’s relevance and accuracy.

Training programmes are essential to equip legal professionals with the skills needed to interact effectively with AI tools, identify potential inaccuracies and provide feedback for continuous improvement. This ensures that legal practitioners can fully leverage the capabilities of AI to enhance their workflows while maintaining accountability and high professional standards. By combining human judgement with AI efficiency, LexisNexis ensures its solutions uphold the integrity and reliability critical to legal work.

How do you see this technology developing in the next 12 months?

The next 12 months are likely to witness rapid advancements in Gen AI for the legal industry. Key developments will include greater integration of Gen AI tools into existing legal workflows, enabling seamless transitions between AI-assisted research, drafting and client communication. The upcoming iteration of Lexis+ AI, named Protégé, exemplifies this trajectory, promising personalised, context-aware assistance that leverages both proprietary content and user-specific data. 

We can also expect improvements in user experience, with AI systems becoming more intuitive and adaptable to individual preferences. On the technical front, advances in transparency and explainability will address lingering concerns about the ‘black box’ nature of AI, fostering greater trust among users. Regulatory compliance will also shape technological innovation, as developers adapt to meet the requirements of frameworks like the European AI Act. These developments will collectively enhance the reliability, security and functionality of Gen AI tools, solidifying their role as indispensable assets in legal practice.

The EU has put itself at the forefront of AI regulation globally. How will this impact the development of Gen AI legal research products?

The EU’s leadership in AI regulation, exemplified by the AI Act, will play a pivotal role in shaping the development of Gen AI legal research products. The act’s emphasis on transparency, accountability and data security ensures that AI systems meet high ethical and operational standards, particularly in high-risk applications like legal research. 

For developers like LexisNexis, this regulatory framework aligns perfectly with existing commitments to responsible AI innovation. The AI Act encourages practices such as ensuring data integrity, implementing robust privacy protections and fostering explainability in AI outputs – all of which are foundational to LexisNexis products. By establishing clear rules and guidelines, the AI Act not only fosters user trust but also creates a predictable environment for innovation. LexisNexis is well-positioned to lead in this space, delivering Gen AI tools that are secure, compliant and tailored to the evolving needs of legal professionals while upholding the rigorous standards demanded by the EU.

The Global Legal Post has teamed up with LexisNexis to help inform readers’ decision-making process in the selection of a Gen AI legal research solution.

Click here to download the LexisNexis Legal AI Solution Buyer’s Guide and here to visit the Generative AI Legal Research Hub.

