UK law firm Hill Dickinson has restricted its staff’s use of generative AI (Gen AI) by requiring them to get approval before using publicly available platforms like ChatGPT.
The move, first reported by the BBC, was outlined in an internal email which noted a “significant increase in usage” of Gen AI tools.
The firm, which employs more than 1,000 people across offices in the UK, Europe and Asia, detected around 32,000 hits to ChatGPT over a seven-day period in January and February.
During that time there were also more than 3,000 hits to the Chinese AI chatbot DeepSeek and nearly 50,000 hits to Grammarly, the writing assistance tool.
The email warned that much of the AI use it had detected was not in line with Hill Dickinson’s AI policy, according to the BBC.
Going forward, staff must obtain approval from the firm before accessing publicly available Gen AI platforms. GLP understands the firm has received and approved requests for use since the email was circulated.
A Hill Dickinson spokesperson commented: “Like many law firms, we are aiming to positively embrace the use of AI tools to enhance our capabilities while always ensuring safe and proper use by our people and for our clients. AI can have many benefits for how we work, but we are mindful of the risks it carries and must ensure there is human oversight throughout.
“Last week, we sent an update to our colleagues regarding our AI policy, which was launched in September 2024. This policy does not discourage the use of AI, but simply ensures that our colleagues use such tools safely and responsibly – including having an approved case for using AI platforms, prohibiting the uploading of client information and validating the accuracy of responses provided by large language models.
“We are confident that, in line with this policy and the additional training and tools we are providing around AI, its usage will remain safe, secure and effective.”
Hill Dickinson’s decision to restrict access to AI platforms comes as the legal profession grapples with how to benefit from the technology while mitigating its risks.
Research repeatedly finds that AI’s ability to automate routine tasks such as legal research, document review and contract analysis could save lawyers several hours per week and generate significant new billable time – $100,000 per lawyer annually, according to a recent Thomson Reuters report.
In a lecture delivered last week, the Master of the Rolls, Sir Geoffrey Vos, said lawyers and judges had “no real choice” about whether to embrace AI, and that there were very good reasons why they should.
However, AI’s ability to ‘hallucinate’ false or misleading responses has landed lawyers in hot water for citing non-existent cases in submissions, while the potential for sensitive client information to be exposed to third parties is also often cited by legal professionals as a major concern.
Hill Dickinson is not aware that any client or internal files were uploaded during the period it monitored, according to a person with knowledge of the matter.
Furthermore, the website hits cited in the memo referred to individual prompts and are therefore unlikely to equate to the number of people using the tools, as users are likely to have made multiple enquiries in a single session.
Stephen Almond, executive director, regulatory risk at the Information Commissioner’s Office – the UK’s data watchdog – underscored the need for businesses to find ways to use AI.
“With AI offering people countless ways to work more efficiently and effectively, the answer cannot be for organisations to outlaw the use of AI and drive staff to use it under the radar,” he said in a statement. “Instead, companies need to offer their staff AI tools that meet their organisational policies and data protection obligations.”
Jenni Tellyn, a consultant at law firm technology consultancy 3Kites, commented: “Despite the hyperbolic rhetoric around AI, it’s evolution, not revolution. That being said, if the technology saves someone five minutes and you multiply that across 500 people in a law firm, it starts to become meaningful change.
“But for law firms to get the benefits of AI, they must train their people properly on how to use the technology and make sure they understand the firm’s AI policy. They also need to make sure their policy and training are updated regularly, given how quickly the technology is developing.”
Last December, the Solicitors Regulation Authority highlighted the need for law firms to properly train staff in the use of AI. It stated: “Firms are accelerating in their adoption and use of generative artificial intelligence, particularly in larger firms and with a focus on back-office efficiencies.
“However, despite this increased interest in new technology, there remains a lack of digital skills across all sectors in the UK. This could present a risk for firms and consumers if legal practitioners do not fully understand the new technology that is implemented.”