Trademark Trouble in the Age of Generative AI: When AI Outputs Create IP Liability

Michael Callahan


Generative AI systems can now produce sophisticated images, videos, text, and audio in seconds. But as these systems improve, they increasingly raise a difficult legal question: what happens when an AI model generates content containing someone else’s intellectual property – especially recognizable brands, logos, or fictional characters?

This issue sits at the intersection of copyright, trademark, and unfair competition law, and it is already beginning to produce high-stakes litigation. Courts will soon need to determine whether existing doctrines can handle AI outputs—or whether entirely new legal frameworks are required.

The Emerging Problem: AI Outputs Containing Protected IP

Many popular generative models can produce outputs that resemble well-known characters, brand logos, or distinctive artistic styles. For example, prompts such as:

  • “Create an image of Darth Vader eating pizza”
  • “Design a sneaker with the Nike swoosh”
  • “Make a Pixar-style animated scene”

can produce images that closely resemble protected intellectual property.

This creates two potential legal issues:

  1. Copyright infringement if the output reproduces protected expressive elements.
  2. Trademark infringement or dilution if the output uses a recognizable brand identifier.

The complexity arises because no human intentionally copied the protected work in the traditional sense—the output is generated probabilistically by a trained model.

Early Litigation Is Already Emerging

Major entertainment companies have begun testing these theories in court.

One notable example involves litigation by The Walt Disney Company and Universal Pictures against Midjourney, alleging that the image-generation platform produces images depicting protected characters such as Darth Vader and Minions without authorization.

The studios argue that the AI company is effectively enabling mass production of infringing images. The defendants, by contrast, are likely to argue that:

  • the system merely responds to user prompts,
  • the outputs are not copies of any specific work, and
  • the technology is comparable to other creative tools used by artists.

How courts resolve these arguments could have sweeping implications for the entire generative AI ecosystem.

Trademark Law Raises Distinct Problems

While copyright law focuses on copying protected expression, trademark law focuses on consumer confusion and brand identity. AI outputs could create several trademark-related issues.

1. Likelihood of Confusion

If an AI generates an image containing a recognizable logo, such as the Nike swoosh or Apple logo, users might assume the brand authorized or sponsored the content.

Traditional trademark infringement asks whether the use is likely to cause confusion about source, sponsorship, or affiliation. AI-generated imagery could easily trigger this concern if it convincingly depicts branded products.

2. Trademark Dilution

Even when consumers are not confused, famous marks are protected against dilution: uses that blur the mark's distinctiveness or tarnish its reputation.

For example, AI-generated images placing a famous character or logo in inappropriate contexts could arguably dilute the brand.

3. False Endorsement

AI systems can also generate images of celebrities endorsing products that they never actually promoted. That raises potential claims under:

  • trademark law (false endorsement), and
  • state right-of-publicity laws.

As deepfake technology improves, these claims are likely to become far more common.

Who Is Liable for AI-Generated Infringement?

Perhaps the most difficult legal question is who should bear liability when an AI system generates infringing content.

Potential defendants include:

  • The user
  • The AI company
  • Both parties

Courts may apply secondary-liability theories: contributory infringement, which generally requires knowledge of the infringement plus a material contribution to it, or vicarious infringement, which requires the right and ability to supervise the infringing activity combined with a direct financial benefit from it.

This issue is complicated by the fact that AI outputs are not deterministic; a user may not know exactly what image the system will generate until after it appears.

Section 230 Probably Won’t Apply

Some technology companies might hope to rely on the immunity provided by Section 230 of the Communications Decency Act. However, Section 230 contains an important limitation: by its own terms, it does not apply to intellectual property claims.

That means AI companies cannot easily avoid liability for trademark or copyright infringement using the same defenses that social media platforms often invoke.

Platform Safeguards Are Emerging

To reduce legal exposure, AI developers are already implementing safeguards such as:

  • prompt filtering (blocking requests involving certain brands or characters)
  • output moderation systems
  • style restrictions to prevent imitation of specific artists
  • watermarking or labeling AI-generated content.

These measures may help demonstrate that companies are taking reasonable steps to prevent infringement, which could become relevant in contributory liability analyses.
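To make the first safeguard concrete, prompt filtering can be as simple as screening user requests against a list of protected brand and character terms before they ever reach the model. The sketch below is purely illustrative, not any vendor's actual implementation; the blocklist contents and the `is_allowed` function are hypothetical.

```python
import re

# Hypothetical blocklist of protected brand and character terms.
# A production system would use a far larger, regularly updated list,
# plus classifiers to catch paraphrases and deliberate misspellings.
BLOCKED_TERMS = [
    "nike swoosh",
    "darth vader",
    "minions",
    "pixar-style",
]

def is_allowed(prompt: str) -> bool:
    """Return False if the prompt mentions a blocked brand or character."""
    normalized = re.sub(r"\s+", " ", prompt.lower()).strip()
    return not any(term in normalized for term in BLOCKED_TERMS)

print(is_allowed("a red sneaker on a beach"))               # True
print(is_allowed("Design a sneaker with the Nike swoosh"))  # False
```

Simple keyword matching like this is easy to evade, which is why developers layer it with output-side moderation and style restrictions; documented, good-faith layering of these measures is what could matter in a contributory liability analysis.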

The Bigger Question: Are Existing IP Doctrines Adequate?

The deeper issue is whether traditional intellectual property law can adequately address AI-generated outputs.

Several unresolved questions remain:

  • Is an AI output that resembles a copyrighted character actually a “copy” of that work?
  • Should liability depend on the training data used to build the model?
  • Do AI tools function more like creative instruments or like automated infringing systems?

Courts will likely grapple with these questions for years.

Why This Issue Matters

The stakes are enormous. Generative AI is rapidly becoming integrated into marketing and advertising, entertainment production, design and branding, along with everyday consumer applications.

If courts impose strict liability for AI outputs containing protected IP, companies may need to implement much stronger filtering systems or obtain extensive licensing agreements with rights holders.

On the other hand, if courts treat generative AI as a neutral creative tool, the technology could dramatically expand the ways in which users create and remix cultural content.

Either way, the next wave of litigation will play a pivotal role in defining how intellectual property law operates in the era of artificial intelligence.

ABOUT THE AUTHOR

Michael Callahan

Michael joins Milgrom Daskam & Ellis as a law clerk, where he works in the litigation and intellectual property practice groups. During his time at CU Law, Michael has served as a volunteer with the Korey Wise Innocence Project as part of a small team advocating on behalf of wrongfully convicted individuals in Colorado. He also serves as vice president of the Student Animal Legal Defense Fund, where he has organized fundraisers and donation drives for local animal shelters. Before joining the firm as a law clerk, he worked as a constitutional law research assistant for the University and as a litigation intern at a small Denver law firm.
