All major AI providers now say "you own your outputs"—but the details differ significantly. Here's what each platform's actual terms say:
| Platform | Output Ownership | Commercial Use | Key Restrictions | Attribution | IP Indemnity |
|---|---|---|---|---|---|
| OpenAI (ChatGPT, DALL·E) | You retain input rights; OpenAI assigns output rights to you | Permitted across all tiers | No competing models; content policies | None required | None (as-is basis) |
| Anthropic (Claude) | You retain inputs; Anthropic assigns output rights | Permitted under commercial terms | No competing models; acceptable use | None required | Enterprise only |
| Google (Gemini) | Google doesn't claim ownership; you own outputs | Permitted (Workspace, enterprise) | Standard acceptable use policies | No hard requirement | None dedicated |
| Midjourney | You own "to fullest extent possible" | Paid subscribers only; revenue thresholds | Public gallery by default; content guidelines | None required (voluntary) | None (user bears risk) |
| Stability AI (Stable Diffusion) | You own outputs under model license | Open licenses; enterprise deals at scale | OpenRAIL restrictions; no competing models | None required | None (broad disclaimers) |
| Microsoft (Copilot) | You own input and output; Microsoft doesn't claim IP | Permitted under business terms | OSS license compliance; safety filters | None required | Copilot Copyright Commitment |
Each platform's terms carry nuances beyond what this summary can capture, so read the governing documents for any platform you rely on commercially.
Beneath the ownership headline, several recurring contractual themes matter in practical use.
Training Data Policies
OpenAI, Anthropic, Google, and Microsoft now draw clear lines between consumer and business traffic:
- Business/API data: Not used for training by default
- Consumer free tiers: May contribute to model improvement unless you opt out
- Enterprise contracts: Explicit no-training commitments available
"If you are pushing confidential data, proprietary code, or core creative assets through AI, consumer free tiers are the wrong place to do it."
No Competing Model Clauses
Most providers restrict using their outputs to train competing AI models. This includes OpenAI, Anthropic, Midjourney, and Stability AI. Owning an output doesn't give you a free hand to use it for training a rival platform.
Disclosure and Attribution
No platform requires formal attribution, but transparency expectations are rising:
- OpenAI: Encourages disclosure for heavily AI-assisted publications
- Google: Warns against passing off AI content as human where deceptive
- Regulated industries: Finance, healthcare, elections may require disclosure
Every generative AI provider tries to push IP risk away from itself. The baseline is "as-is, no warranty"—but some providers now offer protection.
The Default Position: You're On Your Own
Most platforms provide outputs without promises that they're accurate, non-infringing, or fit for purpose. Liability is aggressively capped. If you publish an AI-generated image that echoes a photographer's portfolio and get sued, the default is that you bear the risk.
The Game-Changers: Indemnity Commitments
🛡️ Microsoft Copilot Copyright Commitment
Defends and indemnifies eligible business customers for copyright claims arising from Copilot outputs, provided guardrails are enabled.
🛡️ Anthropic Enterprise IP Protection
Indemnifies customers for IP claims tied to authorized use of Claude, subject to exclusions for misuse or knowing infringement.
Getty v. Stability AI: What It Means
The November 2025 UK ruling largely favored Stability on copyright grounds, but found limited trademark infringement over Getty watermarks in outputs. Key takeaways:
- Training-data disputes target platforms, not individual users
- Output risks fall on you if you publish infringing content
- Vendor indemnity only helps if you stayed within their guardrails
The contract landscape can be negotiated. The copyright landscape is much less flexible. And in the past year, several landmark decisions have crystallized the legal framework around AI-generated content.
"Human authorship is a bedrock requirement of copyright."
The Thaler Decisions (Final)
In 2023, a D.C. federal court rejected Stephen Thaler's attempt to register copyright in art generated by his "Creativity Machine." In 2025, the D.C. Circuit affirmed. In February 2026, the Supreme Court declined to hear the appeal, leaving the D.C. Circuit's ruling as the final word on AI authorship in the federal courts.
Result: Pure AI-generated works cannot be copyrighted under U.S. law. This is now settled law.
The Bright Line (Clarified by 2025 Copyright Office Reports)
The Copyright Office released comprehensive guidance in 2025:
❌ Purely AI-Generated
Images or text produced from prompts with little human editing. Not protectable. Example: A single prompt with no further refinement.
✓ AI-Assisted (Human-Authored)
Human makes creative decisions about selection, arrangement, or substantial modification. Protectable to extent of human contribution. Example: The first AI-assisted image registered (Feb 2025) required 35 iterative edits.
The Fair Use Split: Training Data Lawsuits
Over 70 active lawsuits are testing whether using copyrighted works to train AI models constitutes fair use. Early judicial rulings show a split:
✓ For AI (2 judges)
Rationale: Training is transformative. The AI doesn't copy works; it learns patterns. Output is fundamentally different from input.
❌ Against AI (1 judge)
Thomson Reuters v. Ross: Training on legally licensed content to build a competing product is NOT fair use. Key distinction: pirated vs. legally acquired training data.
Key Cases to Watch in 2026:
- NYT v. OpenAI: Proceeding to trial. 20M ChatGPT logs ordered disclosed (Jan 2026). This will be the most closely watched AI copyright trial.
- Meta Llama case: Allegations of mass piracy in training data. Could set precedent for open-source AI models.
- Bartz v. Anthropic aftermath: The $1.5B settlement (March 2026) did NOT resolve the fair use question—it only compensated authors for past use without granting future licenses.
Music Industry Escalation
Music copyright enforcement against AI has proven even more aggressive than text/image copyright:
- UMG/Concord v. Anthropic (Jan 2026): $3.1 billion lawsuit—the largest music copyright claim against AI
- BMG v. Anthropic (March 2026): $70M lawsuit over music lyrics and sheet music in training data
- Warner Music settlements (2025): Suno and Udio both settled rather than face trial
Takeaway: Music AI faces heightened legal scrutiny. Licensing deals are becoming the industry norm.
International Landscape
- EU: Requires an "author's own intellectual creation" (a human mind making creative choices). The EU AI Act (2024) imposed transparency requirements on AI-generated content.
- UK: Has a "computer-generated works" provision, but courts are split on whether it applies to modern AI. Likely headed toward a human authorship requirement.
- Japan: More permissive—allows AI training without explicit permission under certain conditions.
- Most jurisdictions: Converging on a human authorship requirement for copyright protection.
Practical Consequences (2026 Edition)
- You can use AI outputs freely under platform contracts (subject to their acceptable use policies)
- You may not be able to stop others from copying purely AI-generated material—it lacks copyright protection
- You can protect AI-assisted works where human contribution is substantial—document your creative process
- Platform indemnification matters more than ever. Microsoft Copilot and Anthropic Enterprise offer IP indemnification; consumer plans typically don't
- Training data lawsuits are ongoing. The fair use question won't be definitively settled until 2027 at the earliest
- Add substantial human editing to AI outputs before commercial use
- Document your creative process (save prompts, iterations, edits)
- Use API/Enterprise plans for sensitive or high-value content
- Consider platforms that offer IP indemnification if copyright risk is a concern
- Stay current on ongoing litigation—the landscape is still evolving
Treating AI as a legal-grade tool rather than a novelty requires deliberate habits. Here's what determines whether you're building on sand or rock:
💼 Use Business-Grade Plans
For confidential data or core creative assets, use Enterprise/API tiers with explicit no-training commitments.
🎨 Add Human Creativity
For logos and brand visuals, treat AI as ideation. Have humans refine until the final work reflects clear creative choices.
📁 Document Everything
Keep prompts, raw outputs, and subsequent drafts. This evidences your human creative process for registration.
📝 Update Freelancer Contracts
Clarify AI use conditions, require disclosure, and warrant that deliverables are eligible for expected IP treatment.
🚩 Red Flag Obvious Echoes
If an output contains recognizable characters, logos, or artist styles, treat it as a potentially unauthorized derivative work.
™️ Lean on Trademark Law
For AI logos used as source identifiers, trademark protection may be stronger than arguing about copyright.
In U.S. terms, you generally do not own copyright in purely AI-generated material that contains no human authorship. The Copyright Office's guidance and recent Thaler decisions make clear that a machine cannot be the legal author.
However, the major platforms contractually assign to you whatever rights they have and agree not to assert ownership themselves. This means you're free to use, modify, and commercialize outputs as far as the platform is concerned—but you may not be able to use copyright law to stop others from copying purely AI-generated content.
When you substantially revise or integrate AI output into work reflecting significant human creativity, you can hold copyright in your contribution.
Output ownership and data usage are separate clauses. OpenAI, Anthropic, Google, and Microsoft now all take the position that business/API data is not used to train models by default, while consumer chat may contribute unless you opt out.
If you need both ownership and strict confidentiality, use enterprise offerings that explicitly commit to no training on your data.
Commercial use is, contractually, permitted as long as you're on a plan that grants commercial rights. OpenAI, Google, Stability AI, Midjourney (paid), and Microsoft all allow it.
The bigger risks are: (1) an output may be too close to someone else's protected work, leading to infringement claims, and (2) purely AI-generated content may not be protectable by you, so others might reuse it.
There's no single global rule requiring disclosure. In the U.S., there's no general statutory requirement today. Platform policies and sector regulation fill some of that space—OpenAI encourages disclosure for heavily AI-assisted publications, and Google warns against deceptive presentation.
More specific transparency rules are emerging around political advertising, biometric deepfakes, and consumer-protection contexts. In regulated industries, undisclosed AI use is more likely to be scrutinized.
If an AI image or passage is substantially similar to a copyrighted work, you can be sued for infringement even if you never saw the original and the model did the copying. The fact that the model trained on that work is not a complete defense.
Your best protection: don't use obviously derivative outputs, run reverse image searches or plagiarism checks, and correct or remove content promptly if a rights holder objects. For some use cases, choosing a vendor with indemnity can shift the cost of dealing with such claims.
Owning outputs doesn't automatically let you use them any way whatsoever. Most major providers prohibit using their outputs to develop competing models. OpenAI, Anthropic, and others explicitly restrict this.
If you're building an in-house model, base training on data you own outright, license appropriately, or obtain independently. Don't assume scraping your own ChatGPT or Claude transcripts into a training set is contract-compliant.