Client alert

Gen AI and IP - Highlights from the USPTO/Copyright Office Public Symposium

Authored by: Michael Messinger, Maureen Kelley, and Maggie Martin

U.S. Patent and Trademark Office (USPTO) and Copyright Office (USCO) personnel, practitioners and other stakeholders gathered yesterday at Loyola Law School in Los Angeles, California to discuss issues at the interface between patent and copyright law for generative artificial intelligence (Gen AI) technologies.

The discussion covered the impacts of recent USPTO and USCO guidance on inventorship and authorship, as well as ongoing Gen AI copyright litigation.

In this alert, we include the highlights from the opening remarks that we found most helpful.  You can also learn more about the issues discussed in each of the three panel sessions below.

Sharp Increase in AI-related Patent Filings

Kathi Vidal, under secretary of commerce for IP and director of the USPTO, gave opening remarks.  She highlighted the exciting role AI is already playing in driving innovation.  Director Vidal emphasized that the USPTO has seen a massive influx of AI-related filings.  In 2020, over 20% of all applications filed at the USPTO, or about 80,000 applications, involved AI.  More than half of the art units at the USPTO are involved in examining AI.

USPTO Inventorship Guidance for AI-Assisted Inventions

On Feb. 13, 2024, the USPTO issued extensive guidance on how it will handle inventorship in the examination of AI-assisted inventions.  89 Fed. Reg. 10043.  The guidance, which is in effect now, includes training materials and examples for assessing proper inventorship.  Comments on the new guidance are due by May 13, 2024.

Following Thaler, the USPTO requires that an inventor be a human being.  Thaler v. Vidal, 43 F.4th 1207, 1213 (Fed. Cir. 2022), cert. denied, 143 S. Ct. 1783 (2023).  The USPTO will not categorically reject AI-assisted inventions for improper inventorship.  However, a person using AI technology to invent must have made a significant contribution that qualifies the person as an inventor or joint inventor.  Examiners will find a significant contribution when the Pannu factors are met. See Pannu v. Iolab Corp., 155 F.3d 1344, 1351 (Fed. Cir. 1998). Under these factors, each inventor must:  “(1) contribute in some significant manner to the conception or reduction to practice of the invention, (2) make a contribution to the claimed invention that is not insignificant in quality, when that contribution is measured against the dimension of the full invention, and (3) do more than merely explain to the real inventors well-known concepts and/or the current state of the art.” 89 Fed. Reg. at 10047.

Copyright Office Authorship Guidance for Works Generated by AI

On March 16, 2023, the USCO issued its guidance for works containing AI-generated material.  88 Fed. Reg. 16190.  The USCO received over 10,000 comments and plans to publish a further report in response in 2024.  Under the current guidance, human authorship is required to obtain a copyright, and the copyright protects only the human-authored aspects of the work.  For example, a work containing AI-generated material can support a copyright when the human author made sufficiently creative selections or arrangements of the AI-generated material.  Merely using a Gen AI tool, however, may not be enough if the human did not have sufficient control over the work’s expression or did not contribute sufficiently creative selection, prompt engineering or other effort to the work.  Applicants are required to disclose AI assistance with a work to the Copyright Office when filing a copyright application.

Against this backdrop of recent Gen AI guidance, three panel sessions discussed different issues that confront artists, inventors, content generators, platforms and other stakeholders.

Session 1: Generative AI as Author or Inventor? A Comparison of Copyright and Patent Analyses

The first panel session was moderated by officials of both the USCO and USPTO: Aaron Watson, attorney advisor, Office of Registration Policy and Practice, USCO; and Thomas Krause, director review executive, PTAB, USPTO.  Panelists were law professors with entertainment industry experience: Sandra Aistars, clinical professor, Antonin Scalia Law School, George Mason University; Xiyin Tang, assistant professor of law, UCLA; and John Villasenor, professor of electrical engineering and law, UCLA.

Panelists noted that Gen AI IP policy and regulation often emphasizes either (1) the work or invention itself or (2) the role of the author or inventor.  This can be seen in both the new inventorship and authorship guidelines, which require an assessment of the work itself, with only a cursory eye toward the “view of the author or inventor” regarding the degree to which AI was responsible for creation.  For certain creative works, this may yield results inconsistent with the author’s intention, such as when the use of AI is itself intended as a medium or expression.  Guidelines may allow for an assessment of the degree to which a human author controlled the assisting AI to create a new work, which reflects current case law surrounding joint authorship.

Panelists also discussed outstanding legal questions that remain after the release of the new USPTO inventorship guidelines.  As AI-assisted inventing increases, patent litigation is likely to arise in cases where an AI tool’s contribution to the discovery process is more than ancillary.  The meaning of a “significant contribution” by a human inventor to the claimed subject matter is undefined in the new inventorship guidelines and likely will have to be worked out in the courts for Gen AI-assisted inventions.  This may arise in future litigation through invalidity defenses or claims asserting improper inventorship due to a lack of significant human contribution.

For copyright authorship, panelists debated the degree to which AI assistance may bar copyrightability.  Some panelists noted that while human authorship is central to the current test, there may be impetus to allow AI authorship under the work-for-hire doctrine by way of overlapping policy incentives.  Additional copyright authorship guidance is expected in the coming months.

Session 2: Litigation Update: Copyright and Artificial Intelligence       

The second panel was moderated by professor Justin Hughes, Loyola Law School, Loyola Marymount University.  The three panelists included a copyright legal expert and two attorneys representing the plaintiff and defendant sides in current Gen AI copyright litigation: David Nimmer, of counsel, Irell & Manella; Angela Dunning, partner, Cleary Gottlieb; and Audrey Adu-Appiah, associate, Oppenheim + Zebrak.

Federal Gen AI copyright litigation is proceeding nationwide and includes a number of class actions.  Many of the suits are in the early stages of motion practice or discovery.  A recent trend reported by the panelists is a narrowing of the claims asserted by plaintiffs, with claims based on unfair practices or contributory infringement dropped from amended complaints.  In several cases, the litigation has narrowed essentially to a direct infringement claim under Section 106 and whether a fair use defense applies.

When assessing Gen AI copyright litigation, the panelists noted it is helpful to consider two buckets: the input side and the output side.  On the input side, whether Gen AI operations infringe turns on the ingestion and use of training data by Gen AI tools. A number of issues can come into play, such as whether the training uses a substantially similar work without authorization or makes a transitory copy rather than a more permanent copy.  On the output side, whether a platform faces indirect infringement liability, such as contributory infringement, may involve discovery on what types of guardrails are in place to avoid infringing output, the degree of substantial non-infringing uses, and volitional conduct. In some early complaints, plaintiffs themselves directed AI tools to produce the alleged examples of infringing works, without showing further examples of substantially similar works obtained otherwise.

On the comparison with the earlier Google Books case, panelists generally felt that Gen AI tools which generate text and videos are different because the tools lack many of the restrictions found in the Google Books product.  Also, for the fair use defense, the purpose and character of the use by Gen AI tools is arguably different from that of the Google Books product.

Several cases were referenced where copyright infringement by Gen AI products is being asserted: Andersen v. Stability AI Ltd. (N.D. Cal. No. 3:23-cv-00201), Concord Music Group, Inc. v. Anthropic PBC (M.D. Tenn. No. 23-cv-01092), Kadrey v. Meta (N.D. Cal. No. 23-cv-03417), Getty Images v. Stability AI (D. Del. No. 1:23-cv-00135), Huckabee v. Bloomberg (S.D.N.Y. No. 1:23-cv-09152), and The New York Times v. Microsoft (S.D.N.Y. No. 23-cv-11195).

Session 3: AI, NIL, and the Lanham Act   

The final panel session was moderated by Jeffrey Martin, attorney advisor, OPIA, USPTO.  The four panelists were from industry (tech and entertainment) and academia: Maureen Weston, professor of law, Caruso School of Law, Pepperdine University; Russell Hollander, national executive director, Directors Guild of America; Duncan Crabtree-Ireland, national executive director and chief negotiator, SAG-AFTRA; and Tearra Vaughn, associate general counsel, Meta.

Generative AI poses novel threats to the entertainment industry in the name, image and likeness (NIL) and copyright spaces.  Unauthorized use of private individuals’ NIL has spiked with the advent of generative AI and is spreading quickly across the internet.  The panelists agreed that the “patchwork” state law system in place to protect NIL is woefully inadequate, and the technology is far ahead of the law.

Mr. Crabtree-Ireland focused on the use of Gen AI with performers’ and broadcast journalists’ likenesses to generate unauthorized content and spread misinformation.  Mr. Hollander considered the issue from the perspective of the film industry and made three key points: (i) we need more robust copyright laws to cover both inputs and outputs of AI models, as well as firm requirements for record-keeping when AI is used; (ii) the law must require informed consent and compensation for authors when original works are used and/or manipulated by AI; and (iii) the law should prohibit unauthorized mutilation and distortion of original works by AI.  Professor Weston focused on the use of AI in the sports industry.  She pointed out that the industry has used AI for data collection, but generative AI presents a threat to the publicity and privacy rights of players.

In addition to the above entertainment industry-specific issues, private individuals are seeing deepfakes of themselves spread on social media platforms.  Professor Weston mentioned a particularly troubling case of deepfake pornography involving California middle schoolers.  She emphasized that social media platforms should be instantly responsive to takedown requests.  However, users have experienced delays in the removal of offending content and frustration in not being able to speak to a real person at these companies when they encounter issues.

The panelists hope for the passage of a federal law to clarify the rights of the affected parties and the responsibilities of the companies and social media platforms involved.  Mr. Hollander pointed out that the Digital Millennium Copyright Act, which gave companies “a broad grant of immunity” so long as they put a takedown procedure in place, was “completely ineffective.”  He hopes for a more robust system that places more responsibility on social media platforms.  One idea was a requirement that any AI-generated content on social media platforms be stamped to indicate that it is AI-generated.

Ms. Vaughn from Meta acknowledged the gravity and scope of these issues.  She emphasized that we are at a turning point with AI, similar to the one we encountered in the 1990s with the arrival of the internet.  In both instances, the new technology can be used for good (AI can be used to improve medical diagnoses and corporate efficiencies) and for bad, and she agreed that AI should be used responsibly.  Ms. Vaughn discussed technological research for policing AI on the internet.  She mentioned tools to distinguish a deepfake from a real person and tools to remove one’s NIL from such content.

One thing is clear – we will see a growing need for changes in the law in this space. Vorys closely monitors these developments and advises clients in a number of areas involving Gen AI, including intellectual property, NIL, employment and litigation.

--

The opinions expressed in this alert are those of the authors and not of Vorys or its clients.
