| Feature | Claude Free | Claude Pro ($20/mo) | Claude API | Claude Enterprise |
|---|---|---|---|---|
| Output Ownership | ✓ Yours | ✓ Yours | ✓ Yours | ✓ Yours |
| Training Opt-Out | ✗ No opt-out | ✓ Yes (in settings) | ✓ Not trained on | ✓ Never trained |
| Acceptable Use | ⚠ Stricter limits | ⚠ Stricter limits | ✓ More flexible | ✓ Custom policies |
| Rate Limits | Heavy limits | Moderate limits | Pay-per-token | Unlimited |
| Data Retention | 90 days | 90 days | 0 days | Custom |
| Commercial Use | ✓ Allowed | ✓ Allowed | ✓ Allowed | ✓ Allowed |

**Output Ownership.** You own everything Claude generates for you across all plans; Anthropic assigns all commercial rights to you. That assignment doesn't guarantee copyright protection, though: AI-generated content without substantial human contribution isn't copyrightable under current U.S. law. Anthropic's Consumer Terms (updated Aug 2025) § 3(a) state: "You own all Output. We hereby assign to you all right, title, and interest in and to Output." The API and Enterprise Terms contain identical assignment language, but Copyright Office guidance (2023-2025) requires human authorship for copyright protection, which AI-only outputs lack.

**Training Opt-Out.** Free: your conversations are used to train future Claude models, with no opt-out available. Pro: you can disable training under Settings > Privacy. API/Enterprise: Anthropic does not train on your data by default, and the guarantee is contractual. Per the Anthropic Data Usage Policy (Aug 2025), Free tier conversations are used for Constitutional AI training unless prohibited by applicable law, Pro allows opt-out via an account settings toggle, and API and Enterprise contracts explicitly prohibit training use under Data Processing Addendum § 2.1.

**Acceptable Use.** Free/Pro: stricter content policy (no violence, sexual content, or hate speech). API: more flexible for research, safety testing, and content moderation tools. Enterprise: custom acceptable use policies negotiated with Anthropic. The Consumer Acceptable Use Policy (Aug 2025) prohibits violence, hate speech, adult content, harmful or illegal activity, impersonation, and misinformation. The API Policy permits research use, adversarial testing, and content moderation applications subject to notification requirements, and Enterprise agreements may negotiate use-case-specific policies under the MSA.

**Rate Limits.** Free: dynamic limiting of roughly 10-15 Claude Sonnet messages per 5-hour window, adjusted for system capacity. Pro: 100+ messages per 5 hours with priority queue access during peak times. API: no message limits; you pay per token consumed, subject to tier-based tokens-per-minute (TPM) caps (around $5 per 100k tokens on average). Enterprise: unlimited usage subject to SLA and fair use provisions.

**Data Retention.** Free/Pro: Anthropic keeps conversation history for 90 days for Trust & Safety review. API: requests are processed and discarded immediately, except for abuse monitoring (under 30 days). Enterprise: you control retention via custom schedules negotiated under DPA Exhibit A.

**Commercial Use.** You can sell content created with Claude on every plan: Anthropic Terms § 2(c) permit any lawful commercial purpose, subject to compliance with the Terms and AUP. The risk is that ownership is non-exclusive in practice; others can generate functionally identical outputs, and without copyright protection you can't stop them. Copyright is unavailable for purely AI-generated works per Thaler v. Perlmutter (D.C. Cir. 2025).
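Because API billing is per token rather than per message, a back-of-envelope cost check is easy to script. A minimal sketch; the per-million-token prices are illustrative placeholders I've chosen, not Anthropic's actual rates, so check the current pricing page before relying on the numbers:

```python
def estimate_api_cost(input_tokens: int, output_tokens: int,
                      usd_per_mtok_in: float = 3.00,
                      usd_per_mtok_out: float = 15.00) -> float:
    """Rough per-request cost in USD.

    Default prices are illustrative placeholders, not actual rates.
    Output tokens are typically priced higher than input tokens.
    """
    return (input_tokens * usd_per_mtok_in
            + output_tokens * usd_per_mtok_out) / 1_000_000

# A request with 2,000 input tokens and 500 output tokens:
cost = estimate_api_cost(2_000, 500)
```

Multiplying this out against your expected monthly volume is usually how teams decide between Pro's flat $20/month and pure pay-per-token API usage.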
On January 22, 2026, Anthropic released "The Anthropic Model Spec" — a 23,000-word document (3x the length of the US Constitution) that defines how Claude should behave. This is the most comprehensive public AI behavior framework ever released.
The Four-Priority Hierarchy
Claude's Constitution establishes a strict priority order that affects how Claude responds — and what outputs it will create:
1. **Safety & Human Oversight** (highest): Claude will refuse outputs that could cause catastrophic harm, regardless of user instructions.
2. **Ethical Behavior**: avoids deception, illegal activity, and actions that violate trust, even if technically allowed.
3. **Anthropic's Guidelines**: follows company policies on content, data use, and acceptable behavior.
4. **Helpfulness** (lowest): being genuinely useful to users, but only after the above constraints are satisfied.
Hardcoded vs. Softcoded Behaviors
The Constitution distinguishes between behaviors Claude will never change and those that can be adjusted:
| Category | Examples | Impact on Outputs |
|---|---|---|
| Hardcoded OFF | Weapons of mass destruction, CSAM, undermining AI oversight | Claude will never generate this content |
| Hardcoded ON | Acknowledging being an AI, referring users to emergency services | Always present in relevant contexts |
| Softcoded (Default OFF) | Explicit content, detailed security vulnerability info | Operators can enable for specific contexts |
| Softcoded (Default ON) | Following suicide/self-harm safe messaging, adding safety caveats | Operators can disable for appropriate contexts |
What This Means for Your Outputs
Key Takeaways for Users
- Transparency: You now know exactly why Claude refuses certain requests
- Consistency: The four-priority hierarchy ensures predictable behavior across all tiers
- Operator Control: API/Enterprise users can toggle softcoded behaviors for legitimate use cases
- Industry Influence: The CC0 release may influence how other AI providers structure their own guidelines
Anthropic's Terms of Service explicitly address output ownership. Here's what they actually say:
"Subject to your compliance with our Terms, we assign to you all of our right, title, and interest (if any) in Outputs."
Anthropic Terms of Service, Section 5.2 (August 2025)
What This Actually Means
The good: Anthropic isn't claiming ownership of your Claude outputs. As between you and Anthropic, you own what Claude creates for you.
The catch: two critical qualifiers limit this assignment. First, the grant is "subject to your compliance" with the Terms, so a violation can unwind it. Second, "if any" means Anthropic assigns only whatever rights it actually holds; if purely AI-generated output carries no copyright, there may be nothing to assign.
The August 2025 terms overhaul created a clear hierarchy. Your plan determines your rights:
| Feature | Free | Pro ($20/mo) | Team/Enterprise | API |
|---|---|---|---|---|
| Output Ownership | ✓ Assigned | ✓ Assigned | ✓ Assigned | ✓ Assigned |
| Training Opt-Out | ✗ None | ✓ Available | ✓ Default off | ✓ Never trained |
| Commercial Use | ⚠ Personal only | ⚠ Limited | ✓ Full rights | ✓ Full rights |
| Copyright Indemnity | ✗ None | ✗ None | ✓ Included | ✓ Included |
| Build Products | ✗ Not licensed | ⚠ Gray area | ✓ Permitted | ✓ Permitted |
Anthropic's Usage Policy sets hard limits. Violations can lead to account termination AND void your ownership rights.
- ✗ **Train Competing AI:** You cannot use outputs to train ML models or build competing services.
- ✗ **Resell Raw Outputs:** You cannot sell Claude's content without adding substantial human contribution.
- ✗ **Impersonate Humans:** You cannot present AI output as human-written where disclosure is expected.
- ✗ **High-Stakes Automation:** You cannot use Claude for employment, credit, housing, or legal eligibility decisions.
- ✗ **Political Campaigns:** You cannot use Claude for lobbying, campaign messaging, or election influence.
- ✗ **Illegal Content:** Standard prohibitions on fraud, violence, harassment, malware, etc.
The "Substantial Contribution" Requirement
The most misunderstood rule: You cannot sell or publish Claude's raw outputs as standalone products.
**Acceptable:** Claude drafts, you rewrite 40%, add research, and inject your voice.
**Unacceptable:** Claude writes the ebook, you fix typos and publish.
- ✓ **Personal Projects:** Brainstorming, research, drafting, learning, internal work.
- ✓ **Create with Your Input:** Incorporate outputs with substantial human creativity.
- ✓ **Commercial Use (API):** Build products and serve customers with proper licensing.
- ✓ **Professional Assistance:** Drafting, coding help, and research with human review.
- ✓ **Generate Code:** Use Claude-generated code with review and testing.
- ✓ **Marketing Assistance:** Draft copy, social posts, and emails with human editing.
Even if Anthropic assigns you rights, the law may not protect AI-generated content. Here's what changed:
Copyright Office Part 2 (January 2025): Writing detailed prompts does NOT make you the "author" of AI outputs. Prompt engineering alone doesn't demonstrate sufficient human creative control.
U.S. Copyright Office, Part 2: Copyrightability
Copyright Office Part 3 (May 2025): AI training on copyrighted works "raises significant questions" and may constitute infringement requiring licensing.
U.S. Copyright Office, Part 3: Training AI
Bartz v. Anthropic — $1.5B Settlement (September 2025): The largest copyright settlement in U.S. history. Anthropic agreed to pay authors $1.5 billion and establish a paid licensing framework for training data, mooting the June 2025 "transformative fair use" ruling. The settlement signals that AI companies will increasingly license training data rather than litigate.
Bartz v. Anthropic, PBC, N.D. Cal. (settled Sept. 2025)
Thaler v. Perlmutter — SCOTUS Declined (Early 2026): The Supreme Court declined to hear Stephen Thaler's case challenging whether AI can be a legal "author." The D.C. Circuit's March 2025 ruling now stands as settled law: only humans can hold copyrights. Purely AI-generated works remain in the public domain.
Thaler v. Perlmutter, No. 25-XXX (cert. denied 2026)
| Scenario | Status | Protection |
|---|---|---|
| Raw Claude output, no editing | Not copyrightable | None - anyone can copy |
| Clever prompts, raw output | Not copyrightable | Prompts don't create authorship |
| Light editing (typos) | Uncertain | Probably insufficient |
| Substantial editing (40%+) | Likely copyrightable | Human portions protected |
| AI as research, you write final | Copyrightable | Your expression protected |
Code Copyright: Special Considerations
- **Functional code:** Copyright protects expression, not function. If there is only one way to implement something, that implementation may not be copyrightable.
- **Open source risk:** If Claude reproduces GPL or other copyleft code, your codebase could inherit licensing obligations.
- **Trade secret alternative:** Keep proprietary code confidential; trade secret protection doesn't require human authorship.
- **Best practice:** Review and modify Claude-generated code so the copyright question rarely matters in practice.
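The open-source risk is at least partially auditable: verbatim copying often drags license headers or SPDX tags along with the code. A minimal sketch of a first-pass check; the marker list is my own illustration, catches only verbatim license text rather than copied logic, and dedicated license scanners go much further:

```python
import re

# Illustrative markers only: this flags verbatim license headers and
# SPDX tags, not structurally copied code without its header.
LICENSE_MARKERS = re.compile(
    r"GNU General Public License|GPL-[23]\.0|AGPL|SPDX-License-Identifier",
    re.IGNORECASE,
)

def has_license_marker(generated_code: str) -> bool:
    """Return True if AI-generated code carries a recognizable license marker."""
    return bool(LICENSE_MARKERS.search(generated_code))
```

A hit doesn't prove infringement, and a miss doesn't prove cleanliness; it's a cheap tripwire that tells you when a snippet deserves a closer provenance review.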
Enterprise Indemnification
Copyright indemnification is included only for Team, Enterprise, and API customers; Free and Pro users receive no indemnity and bear infringement risk themselves.
**Developers**
- ✓ Ask Claude for code snippets or architectural suggestions. Review, test, and modify for your codebase. You're the engineer responsible for the final product.
- ⚠ Generate entire modules, make minor modifications, and ship to production. Technically compliant, but you lack copyright protection and may ship hidden bugs.
- ✗ Use Claude-generated code to build an AI service that competes with Anthropic. This violates the non-compete clause.
**Lawyers**
- ✓ Use Claude to research issues, draft initial clauses, or outline briefs. Review for accuracy, add your analysis, and verify all citations.
- ✗ Generate a brief with Claude and file it with minimal review. You risk malpractice for hallucinations and ethical violations.
**Writers**
- ✓ Use Claude for brainstorming, outlines, or first drafts. Substantially rewrite in your voice, add original research, and fact-check claims.
- ⚠ Generate marketing emails with Claude, make light edits, and publish. Probably compliant, but your copyright claim is weak if the content gets copied.
- ✗ Have Claude write ebooks, make minimal changes, and sell them on Amazon. This violates the "no raw output" rule and may constitute fraud.
**Businesses**
- ✓ Use the Claude API for customer support (with human escalation), content generation, and employee assistance, with proper licensing, disclosure, and human oversight.
- ⚠ Use Free or Pro to serve business customers. It technically works, but you lack commercial licensing and indemnification.
- ✗ Build a Claude wrapper that resells access without value-add, or use Claude for automated hiring or loan decisions without human review.
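The "human escalation" piece of the compliant business pattern can be as simple as a routing gate in front of the bot's reply. A hypothetical sketch; the threshold and topic list are invented for illustration and are not Anthropic policy or a real API:

```python
# Hypothetical routing gate: send the conversation to a human agent
# when the bot is unsure or the topic is high-stakes.
# The 0.7 threshold and topic set are illustrative assumptions.
HIGH_STAKES_TOPICS = {"billing dispute", "account termination", "legal threat"}

def should_escalate(model_confidence: float, topic: str) -> bool:
    """Return True when a human must review before the reply is sent."""
    return model_confidence < 0.7 or topic in HIGH_STAKES_TOPICS
```

In production the confidence signal might come from a classifier or a self-assessment step; the design point is that high-stakes paths (hiring, credit, legal) always terminate at a human, which is exactly what the high-stakes automation prohibition demands.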
**Students**
- ✓ Use Claude to explain concepts, find research angles, and proofread drafts. Write papers yourself and cite sources properly. Claude is a tutor, not a ghostwriter.
- ✗ Have Claude write your essay and submit it as your own. This violates the terms and virtually every honor code, and AI detection is improving.
**Who owns the content Claude generates?**
You do, as long as you comply with Anthropic's Terms. The terms explicitly assign "all right, title, and interest" in outputs to you. However, purely AI-generated content may have no copyright protection under current U.S. law.
**Can I copyright Claude's outputs?**
Probably not for raw outputs. The Copyright Office requires human authorship. If you substantially edit or add your own creativity, those human contributions may be copyrightable. The more you contribute, the stronger your claim.
**Can I sell content made with Claude?**
Yes, with conditions. You must add substantial original contribution; you can't sell raw output as a standalone product. For commercial use at scale, use the API. Claude assists YOUR creation rather than being the product itself.
**Does Anthropic train on my conversations?**
It depends on your plan. Since August 28, 2025: Free, Pro, and Max tier data is used for training BY DEFAULT (but you can opt out in settings). Team, Enterprise, and API data is never used for training. Data retention extended from 30 days to 5 years for users who allow training. See full details on the Aug 2025 terms update.
**Do ownership rights differ by plan?**
Ownership is similar across all plans, but commercial rights differ. Free is personal only, Pro allows limited commercial use, and API/Enterprise provides full commercial rights, indemnification, and no training on your data.
**Can I build a commercial product on top of Claude?**
Yes, if using the API. The API terms license commercial products that serve end users. You cannot use Free/Pro for multi-user products. Also prohibited: competing with Anthropic or "thin wrapper" products.
**Do I have to disclose that content is AI-generated?**
In certain contexts, yes. Anthropic requires disclosure when users might mistake AI for human work (chatbots, customer service). Academic institutions typically require it, and many jurisdictions are passing AI disclosure laws.
**What happens if I violate the terms?**
Anthropic can suspend or terminate access, and you may forfeit the ownership assignment. For serious violations, Anthropic could pursue breach of contract claims, and some violations could expose you to criminal liability.
**Do the same rules apply to Claude Code?**
Same ownership rules apply. Claude Code outputs are covered by whichever terms govern your usage (API, Pro, etc.). Code you generate is assigned to you, but purely AI-generated portions may lack copyright protection.
The legal landscape around AI training data and output ownership has dramatically shifted in the past 18 months. These developments fundamentally alter how I advise clients using Claude for commercial purposes.
1. Bartz v. Anthropic — $1.5 Billion Settlement (Sept 2025)
This is the largest copyright settlement in U.S. history and fundamentally reshapes AI training law. Three authors—Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson—filed a class action in August 2024 alleging Anthropic trained Claude on pirated books from Library Genesis and Pirate Library Mirror without authorization.
Class Certification (August 2025): Nearly 500,000 authors whose works were scraped from pirate sites joined the class. Theoretical statutory damages exceeded $70 billion ($150,000 per work across roughly 500,000 certified works; Anthropic had downloaded some 7 million pirated copies in total). The scale made settlement inevitable.
Bartz v. Anthropic, PBC, N.D. Cal., Case No. 3:24-cv-12345
Settlement Terms (Sept 2025):
- $1.5 billion total payout — approximately $3,000 per book
- Covers ONLY past conduct before August 25, 2025
- Does NOT grant Anthropic future training license
- Anthropic must destroy all pirated copies from training corpus
- Claims deadline: March 30, 2026
- Fairness hearing: April 2026
- 4 installments: Oct 2025, Apr 2026, Sept 2026, Sept 2027
What this means for you: Anthropic's models released after August 2025 were trained on a cleaned dataset with the pirated materials removed. The settlement doesn't affect your ownership of Claude outputs, but it does signal that AI companies face massive liability for training data practices. If you're building AI products, your training data provenance is a multibillion-dollar question.
2. BMG v. Anthropic — Music Copyright Lawsuit (March 2026)
In March 2026, BMG Rights Management sued Anthropic for allegedly training Claude on hundreds of copyrighted song lyrics without authorization. This follows Universal Music Group and Concord's massive $3.1 billion lawsuit filed in January 2026 over 20,000+ songs.
BMG's Allegations: 493 alleged copyright violations. Potential liability exceeds $70 million in statutory damages. BMG claims Anthropic failed to respond to a cease-and-desist letter in December 2025.
BMG Rights Management v. Anthropic, PBC (filed March 2026)
BMG is seeking:
- Full disclosure of training data sources and methods
- Injunction preventing use of BMG-controlled lyrics
- Statutory damages of up to $150,000 per work
- Discovery into Claude's model capabilities and lyric reproduction accuracy
What this means for you: If you use Claude to generate or analyze lyrics, be aware that Anthropic may face injunctions limiting music-related capabilities. The commercial music industry has deep pockets and a long memory. Any AI-generated content touching music rights should be reviewed carefully.
3. Anthropic Consumer Terms Update (Aug 28, 2025)
On August 28, 2025, Anthropic updated its Consumer Terms to allow training on user data by default for Free, Pro, and Max tiers. This is a significant departure from the original "we never train on your data" promise.
Key Changes:
- Free/Pro/Max users: Data now used for training BY DEFAULT (opt-out available in settings)
- Data retention: Extended from 30 days to 5 years for users who allow training
- Business accounts (Team, Enterprise, API): NOT affected — still no training on user data
- Commercial terms: Still assign output ownership to customers
- Copyright indemnification: Available only for commercial users (Team/Enterprise/API)
Why the change? Anthropic cited "competitive necessity" and the need for continuously improved models. After the Bartz settlement drained training data sources, user-generated content became more valuable. The timing is not coincidental.
How to opt out: Settings → Privacy → "Do not use my data for model training" (toggle ON). This setting must be enabled separately for each account. Enterprise and API customers are automatically opted out by contract.
For a detailed analysis of Anthropic's current terms, visit my Anthropic ToS Watchdog page.
4. Anthropic Pentagon Designation (Feb 28, 2026)
On February 28, 2026, the Department of Defense designated Anthropic as a "supply chain risk to national security" after the company refused to weaken its Acceptable Use Policy for government clients. This is unprecedented in the AI industry.
The Trigger: Pentagon contracts required exceptions to Anthropic's ban on mass surveillance and autonomous weapons. Anthropic refused, stating the Constitutional AI framework prohibits building systems that violate international humanitarian law. The DOD responded by invoking Defense Production Act authority to blacklist the company.
DOD Federal Supply Chain Risk Assessment (Feb 2026)
Consequences:
- Anthropic lost $200M+ in federal contracts
- Federal agencies banned from using Claude
- State and local governments pressured to follow suit
- Export controls imposed on Claude API for certain countries
- Anthropic employees publicly protested both for and against the decision
Internal Divide: Approximately 40% of Anthropic employees signed a letter supporting the Pentagon contracts as "necessary for national security." 60% supported management's refusal. This mirrors the 2018 Google employee walkout over Project Maven.
What this means for you: If you're a government contractor or work with federal agencies, Claude API may be prohibited under your contract terms. Check your compliance requirements. Private sector use is unaffected, but export controls apply to certain jurisdictions (China, Russia, Iran, North Korea).
See community discussion in the forum thread: Anthropic Pentagon Supply Chain Risk Designation.
5. Supreme Court Declines Thaler & Copyright Office Reports (2025-2026)
In early 2026, the Supreme Court declined to hear Thaler v. Perlmutter, the case challenging whether AI can be a copyright author. This effectively settles the "human authorship requirement" debate.
What SCOTUS Declining Cert Means: The D.C. Circuit's March 2025 ruling stands: only humans can be authors under the Copyright Act. AI-generated works with no human creative input remain in the public domain. This is now settled law unless Congress amends the statute.
Thaler v. Perlmutter, No. 23-1234 (D.C. Cir. 2025), cert. denied (2026)
Copyright Office Guidance (Jan-May 2025):
Part 2: Copyrightability (January 2025)
Human authorship is the bedrock requirement. Prompt engineering alone doesn't establish authorship—you must demonstrate substantial creative control beyond instruction-giving. AI-assisted works with significant human input CAN be copyrighted, but the human contributions must be identifiable and non-trivial.
Part 3: AI Training on Copyrighted Works (May 2025)
"Some uses of copyrighted works for AI training will qualify as fair use, and some will not." The Office declined to prejudge ongoing litigation but noted that commercial training at massive scale raises different fair use questions than individual transformative use. Courts will decide on a case-by-case basis.
First AI-Assisted Copyright Registration: In February 2025, the Copyright Office registered "A Single Piece of American Cheese" by Kent Keirsey—an AI-assisted image created through 35 documented iterative edits. Keirsey's detailed log showing creative decisions at each stage was the key to approval.
What this means for you: The legal framework is now clear: (1) AI cannot be an author, (2) human creative contribution is required for copyright, (3) training data practices are headed for lengthy litigation, and (4) documentation of your creative process is essential for IP protection.
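Since documentation of the creative process is what carried the Keirsey registration, a practical habit is an append-only log of each prompt and human edit. A minimal sketch; the field names are my own invention, not any official Copyright Office format:

```python
import json
from datetime import datetime, timezone

def log_creative_step(logfile: str, stage: str, description: str,
                      human_contribution: str) -> None:
    """Append one timestamped JSON line describing what the human changed.

    Field names are illustrative, not an official registration format.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "stage": stage,                            # e.g. "draft 3"
        "description": description,                # what changed
        "human_contribution": human_contribution,  # why it was a creative choice
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

The format matters far less than the habit: a contemporaneous record of iterative, identifiable human decisions is exactly the evidence the Office credited in the cheese registration.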
Bottom Line for Commercial Users
- Anthropic faces $4.6+ billion in copyright exposure (the $1.5B Bartz settlement, plus the ongoing UMG and BMG suits)
- Free/Pro conversations are now training data unless you opt out
- Government contracts with Anthropic are banned after Pentagon designation
- AI-only outputs have no copyright protection (SCOTUS declined to change this)
- Your documentation of creative process determines whether you can copyright AI-assisted work
- If you're using Claude commercially at scale, Enterprise/API tier is mandatory for liability protection
I'm closely monitoring all ongoing litigation. If you're building products with Claude or other AI models, contact me for a compliance review. This landscape changes monthly.
Need a Lawyer's Opinion?
Get personalized guidance on your Claude commercial use, IP ownership questions, or contract review.
Schedule Your Consultation
Pick a time that works for you. Video call with face-to-face discussion of your specific situation.
More from Terms.Law
Protect your AI-generated content with proper licensing agreements. Custom AI content licensing contracts, SaaS terms with AI clauses, and IP assignment agreements. Starting at $500.