The EU AI Act: What businesses and their legal advisers need to know

Businesses that use AI systems need to start preparing for how the new laws will impact their operations, writes Fladgate partner Tim Wright


The new AI Act is a comprehensive piece of legislation that sets out rules for the development and use of artificial intelligence in the EU. Following the recent vote on the act’s compromise text in the European Parliament, the next stage is the trilogue, a type of negotiation which involves discussions between the EU Member States, the Parliament and the Commission aimed at reaching final agreement on the text of legislation. 

The trilogue can take several months; the act is expected to be adopted in 2024, with a grace period of at least 18 months to allow businesses to get ready before the finally agreed rules take effect. MEPs with primary responsibility for the act’s adoption are running a two-phase public consultation (the first phase closes at the end of June, the second at the end of July) so that affected companies can submit proposed revisions to the compromise text. This feedback will help steer the MEPs’ strategy in the trilogue, although time is short for businesses wishing to participate.

AI risk categories

The act classifies AI systems into four risk categories: unacceptable risk, high risk, limited risk and minimal/no risk. Unacceptable risk AI systems are banned outright, while high-risk AI systems are subject to strict requirements, compared to limited-risk and minimal/no-risk AI systems. This approach might prove problematic for general purpose AI – as the technology develops, we can expect to see more and more use cases emerge, making it difficult to classify a system at the outset. Other practical implications for businesses that develop or use AI systems include:

  • Conformity assessment and prior notification. Before placing their systems on the market, developers of high-risk AI systems will need to engage an independent organisation (a designated notified body) to conduct a conformity assessment confirming compliance with the rules. In certain sensitive areas, such as healthcare and law enforcement, they will need to notify national authorities and obtain prior approval.
  • Compliance. Businesses that develop or use AI systems will need to comply with the act’s requirements, which are complex and difficult to navigate but include ensuring that AI systems are designed and used in a way that minimises the associated risks.
  • Performance monitoring. Businesses will need to monitor the performance of their AI systems to ensure that they are operating as intended, including monitoring systems for potential risks such as bias and discrimination.
  • Transparency and explainability. Businesses will need to provide transparency and explainability about their AI systems, including explaining how their AI systems work and how they make decisions.

Copyrighted material in training data

Against the backdrop of a flurry of copyright infringement cases brought by rightsholders against developers of generative AI (a type of artificial intelligence that can create new content, such as text, images or music), the compromise text requires providers of generative AI models to disclose the source of any copyrighted material used in their training data. These disclosure requirements appear potentially far-reaching, although their exact scope is far from clear, especially since the EU (unlike the US) does not have a copyright registration system; copyright rules are also not fully harmonised across the EU and can differ from member state to member state.

Implications for lawyers

New laws and regulations such as the EU’s AI Act create opportunities for legal advisers to work closely with their clients to help them plan and prepare for the new rules, and then to advise boards and compliance, risk and development/operations teams on an ongoing basis, including by helping to develop policies, governance and training programmes. Advisers may also need to provide advice on antitrust issues and to draft and negotiate contracts dealing with the use of AI systems (covering key issues such as ownership of data, the use of copyrighted material and liability). Litigators with deep knowledge of the regulations and the technology will also be needed: several copyright infringement cases have already been reported, and further disputes can be expected in the fields of consumer harm (including bias and discrimination), product liability and public law.

Concluding thoughts

The EU AI Act is a complex piece of legislation with a range of practical implications for businesses and their lawyers. Businesses that develop or use AI systems need to start preparing and planning for the act, considering its requirements and their impact, and taking steps to build compliance, monitoring and governance programmes accordingly. Although businesses located outside the EU may not be subject to the exact same requirements as those in the EU, the so-called ‘Brussels Effect’ may well see regulators in other jurisdictions introducing copycat regulation with the AI Act becoming the de facto standard for a large part of the global market.

Tim Wright is a partner at Fladgate, covering technology, sourcing and commercial law.

