Encouragement and caution: USPTO’s guidance on AI-assisted patent drafting

The USPTO is leaning on old rules to help govern AI-related patent filings, write Holland & Hart partners Philip Harris and Nathan Mutter


Artificial intelligence (AI) tools, including generative AI (GenAI) and others, have exploded in popularity, empowering users to quickly generate volumes of content with minimal inputs, oversight or technical expertise. The advantages of these tools, including the potential to reduce costs, refine outputs and increase quality, are recognised across the legal industry, including in the preparation and prosecution of patent applications before the United States Patent and Trademark Office (USPTO), among other venues. 

The widespread investigation and development of AI tools has in some cases outpaced meaningful regulations or directives on how to apply this revolutionary technology within existing legal frameworks, including the patent arena. In an effort both to encourage AI adoption and development and to caution stakeholders about the risks, the USPTO published guidance on the use of AI-assisted tools in proceedings before the USPTO, effective 11 April 2024. Here, we briefly summarise the guidance and provide takeaways on some of the key areas.

Summary of guidance

Notably, the new guidance did not introduce new rules or regulations for AI-assisted patent applications. Instead, it explains how existing rules – including the duty of disclosure under the duty of candour and good faith, the signature requirement, client confidentiality obligations, foreign filing regulations and rules for accessing the USPTO’s electronic filing system – will be interpreted to cover novel issues arising from the use of GenAI.

Duty of candour and good faith

The guidance explains that practitioners are not required to disclose the use of AI tools to the USPTO unless the use is considered “material to patentability”. Regarding claims generated by AI without significant human contribution, the guidance indicates that “if an AI system is used to draft patent claims that are submitted for examination, but an [inventor or practitioner] has knowledge that one or more of the claims did not have a significant contribution by a human inventor, that information must be disclosed to the USPTO” under the duty of candour and good faith. The guidance also references earlier USPTO guidance from February 2024 on “human inventorship” and what constitutes a “significant contribution” by a human, citing the Pannu factors, among other case law.

Reasonable enquiry obligation – signature requirement

The guidance reminds practitioners that, by signing a submission, the individual certifies that all statements are (to the individual’s own knowledge) true and that the individual performed a reasonable enquiry under the circumstances. In the context of GenAI, the guidance indicates this requires the individual to review and verify the contents of the submission – including facts, arguments and authorities – and to avoid errors or omissions. A practitioner cannot, according to the guidance, simply rely on the accuracy of the AI tool to satisfy the ‘reasonable enquiry’ obligation under the signature requirement.

Data and confidentiality

The guidance touches on some of the underlying technical facets of using an AI system to prepare patent documents, such as data storage and handling, and how the existing rules on client confidentiality, foreign filing licences and export controls apply. Specifically, the guidance reminds practitioners that inputting confidential information into an AI system may publicly expose the data or inadvertently export it to servers outside of the US in violation of export control or foreign filing licence rules.

Practitioners are reminded to verify that any AI system used complies with the data security and confidentiality requirements imposed by existing laws and, of course, their clients. Addressing AI assistants and other AI systems that integrate with electronic filing systems, the guidance cautions that the existing rules for accessing and interacting with the USPTO’s electronic filing systems must be followed when using such tools.

Reactions and applications

While the patent community has reacted positively overall to the new guidance and some of the clarification it provides, some pressure points and areas of ambiguity remain.

On the positive side:

  • The USPTO is seen as encouraging the use of AI for the benefit of innovators, practitioners and examiners by declining to adopt new restraints on AI-assisted drafting and instead framing the regulation of AI within the context of the existing rules.
  • The guidance is a victory in the shorter term by avoiding an alternative approach that may have been broader and more rigid, or one that imposed cumbersome rules on technology that is still developing quickly.
  • By not imposing stringent disclosure requirements for all AI uses or prohibiting certain AI-generated documents from being filed, the USPTO steered clear of overly chilling AI development and innovation in the patent space to the detriment of stakeholders. The guidance also serves as a helpful caution to practitioners and innovators who are tempted to use AI tools without fully appreciating the accompanying risks and obligations to the USPTO and other judicial bodies.

On the challenging side:

  • A presupposition that AI can – in the abstract – autonomously ‘invent’. Although the guidance correctly focuses on the contribution by a human inventor, a framework that turns on what was ‘invented’ by a machine versus a human will lead to the proverbial slippery slope.
  • The guidance takes a more proscriptive approach and highlights the ‘dangers’ of using AI in some contexts that seem well suited to it – such as identifying relevant references for information disclosure statement (IDS) submissions – while leaning on the more ambiguous ‘reasonable enquiry’ obligation to ensure that errors or omissions made by AI (a common and well-documented occurrence) are caught and corrected by human practitioners.
  • While the current guidance is a good first step, as AI continues to evolve and blur the lines between human and machine contributions to inventorship, content generation and verification, the USPTO will likely need to adapt its regulations with more tailored guidance.


Although AI tools offer an incredible amount of promise and are improving rapidly, their usage implicates a wide range of legal obligations for practitioners and applicants that should be carefully considered by experienced counsel. To avoid running afoul of these obligations, practitioners must carefully review and correct any outputs from AI systems, ensure that patent claims meet the standard for human inventorship and verify that AI systems comply with data security and export control regulations.

Philip Harris is a partner and practice group leader of Holland & Hart’s patent team. Nathan Mutter is also a partner at Holland & Hart specialising in patents.
