As SaaS tools add AI-powered functionality to their platforms, customers increasingly request an AI addendum to their Master Services Agreement (MSA) to avoid delays in procurement and renewal. If you’re a privacy leader or product counsel tasked with creating an AI addendum, this guide is for you.
AI addenda usually aren’t shared publicly, so finding a strong example to emulate can be a challenge. This AI prompt does that work for you, creating a template or first draft of an AI addendum that product counsel can quickly iterate on.
First, let’s go over some AI addendum best practices, or you can jump directly to the prompt.
Do you need an AI addendum?
Because most existing MSAs fail to address key AI governance risks like model training and hallucinations, privacy-minded buyers request these addenda to fully understand and mitigate the risks posed by a vendor’s AI tools. If your prospects and customers have difficulty finding answers to AI governance questions, an AI addendum can help expedite their research.
It’s important to note that an AI addendum, while helpful, is not a requirement for every contract. Some vendors choose to disclose AI processing directly in the MSA and, depending on the nature of the AI usage, this may be more than sufficient. If you’re considering whether to request AI addenda from your own vendors, start by reviewing the vendor’s available AI documentation before determining if an AI addendum is necessary.
What should an AI addendum include?
An AI addendum should present clear definitions, roles, safeguards, and disclaimers, show alignment with your security and privacy commitments, and address liability.
Definitions
- Which features the AI addendum covers (i.e., which features are considered AI features)
- What counts as customer content or input
- What your AI features deliver as output
- How you define training or model improvement
- Whether you use a third-party AI provider and which ones
Roles
Once your definitions are clear, your AI addendum should outline which parties have ownership or access to which elements of the functionality.
- Typically, a customer owns input (their data) and has the right to use output, while the provider retains rights to the models and underlying system. This should be explicitly stated.
- Note that the provider’s obligations flow down to your sub-processors, including any AI model providers.
- Identify whether the provider can use de-identified or aggregated telemetry, and anticipate that some customers may negotiate this point.
Safeguards
Describe how AI features are designed and operated in accordance with responsible AI principles and customer expectations. Include:
- A clear answer to whether customer data is used to train models
- Description of other data usage limits, such as whether customer data is used for service improvement and if it will be shared with subprocessors/model providers
- A summary of how long customer data will be retained (if at all)
- Whether customer data is isolated from that of other customers
- Any measures to prevent biased or discriminatory outputs from the model
Disclaimers & acceptable use restrictions
Protect your business by spelling out your tool’s limitations and the restrictions you place on its use. Typical inclusions:
- Output may be inaccurate or incomplete
- Human review is encouraged for all AI outputs
- Outputs should not be relied upon for high-risk decisions
- The customer is responsible for how they choose to apply AI output
- Reverse engineering and model extraction are prohibited
- Using the AI to generate illegal content or malware, create scams, build competing models, identify individuals, or infer sensitive traits is prohibited
Security & privacy alignment
Your AI addendum should cross-reference your Data Processing Agreement (DPA), subprocessor terms, confidentiality obligations, and security measures for data handled by AI components.
Customers may also request language aligned to specific GDPR, CCPA or EU AI Act obligations. Some customers may expect support for regulatory inquiries, DPIAs, or general record keeping.
Liability
Your AI addendum must address indemnities and liability allocation, but expect that some customers will heavily negotiate over this. Your addendum may answer:
- Will you indemnify for IP infringement related to AI output?
- Are AI claims carved out of liability caps?
- Are there specific AI-related limitation-of-liability clauses?
Why start with this AI prompt?
This AI prompt gives you a head start drafting an AI addendum by focusing on the best practices your customers will look for in the final product. At a baseline, it will include:
- Definitions (input, output, model, training data)
- Clear statement on model training and customer data
- Sub-processor and model provider flow-downs
- Output ownership and assignment
- Output-related IP indemnity
- Tenant separation and data isolation
- Bias and discrimination guardrails
- Regulatory inquiry support
You don’t want to end up with an AI addendum that doesn’t mention training, claims you own the output, or buries risk inside subprocessor terms. This prompt gives you a starting point that avoids those red flags, helping your product clear procurement faster and reassuring your customers that their data is safe with you.
How should I use this prompt?
This AI prompt can be used by legal teams to start a first draft of an AI addendum, and by privacy or AI governance leaders to organize material for legal counsel’s review.
You can input the prompt into a service like GPT-5 or Gemini. Remember not to input private, confidential, or proprietary information into a free model that may train on or share your data: we recommend using contracted enterprise tools for prompts like this.
Any AI-generated draft should be reviewed by qualified legal counsel before being incorporated into a binding agreement. The prompt gives you a major head start on the task, but it doesn’t eliminate the task.
In the prompt below, replace the bracketed placeholders with your own context. After you submit the prompt, also upload any existing AI addendum drafts, your standard MSA AI language (if available), and any relevant product description or security doc excerpts.
Here’s the exact AI prompt you can copy and paste
You are a privacy attorney + commercial contracts specialist. Your job is to draft a clean, contract-ready AI Addendum to attach to a Master Services Agreement (MSA) with a software vendor.
1) Context
Customer name: [CUSTOMER]
Vendor name: [YOUR COMPANY’S NAME]
Product/service: [PRODUCT]
Covered use case: [USE CASE (e.g., chatbot, analytics, email, support)]
Regions impacted: [US / EU / UK / global]
Data types involved: [include all that apply: personal data, sensitive data, HR data, financial data, health data, children’s data, proprietary business data]
Does the service use third-party LLM/model providers? [YES/NO/UNKNOWN]
Risk tolerance: [Low / Medium / High]
2) Inputs
I will paste one or more of the following:
- A) The vendor’s proposed AI Addendum (if provided)
- B) The vendor’s standard MSA AI language (if included)
- C) The product description / security doc excerpts (optional)
Your job is to generate a standardized AI Addendum baseline that I can use as my starting point for negotiation.
3) Required output: AI Terms Addendum (contract language)
Draft an AI Terms Addendum with the following sections and requirements:
Section 1 — Definitions
Include clear definitions for:
“Covered AI Services”
“Model”
“Input”
“Output”
“Training Data” (vendor training data vs customer data)
Section 2 — Customer Responsibilities
Customer is responsible for the lawfulness of Inputs and for its usage decisions. Keep this reasonable and standard.
Section 3 — Vendor Commitments (non-negotiable baseline)
Include all of the following:
Vendor will NOT use Customer Data, Inputs, or Outputs for training, retraining, fine-tuning, or improving any model or AI system (unless explicitly authorized in writing)
Inputs and Outputs are Customer Confidential Information
Vendor will comply with applicable AI + data protection laws
Vendor has obtained all necessary rights/permissions for any vendor training data it uses
Section 4 — Subprocessors + Model Provider Flow-Down
These obligations apply to subcontractors, including third-party model/LLM providers
Vendor must have written agreements at least as strict as this addendum
Vendor remains responsible for subprocessors’ acts/omissions
Section 5 — Tenant Separation / Data Isolation
Customer Data + Outputs must be held in logically or physically separate tenant infrastructure
Include controls to prevent commingling across customers
Section 6 — Bias / Discrimination Guardrails
Vendor will use reasonable and industry-standard efforts to prevent unlawful bias or discrimination in Outputs
Section 7 — Regulatory Inquiry Support
Vendor will provide reasonable assistance and documentation to support Customer compliance and respond to regulatory inquiries related to the Covered AI Services
Section 8 — Ownership
Vendor owns the AI service itself
Customer owns all Inputs and Outputs
Vendor assigns all rights, title, and interest in Outputs to Customer
Section 9 — Indemnification (Outputs included)
Vendor’s IP indemnity applies to Outputs, with standard carveouts:
(i) customer bypassing safety systems
(ii) customer modification of outputs
(iii) insufficient rights in customer input data
(iv) customer use/distribution in a way it should have known infringes
4) Second output: Summary + negotiation view
After the contract language, also provide:
A plain-English summary of what this addendum protects
A redline checklist of what vendors usually push back on + recommended fallback positions
5) Style constraints
Write in professional contract language
Avoid undefined terms
Keep it readable and enterprise SaaS appropriate
Before writing the addendum, extract any missing or risky clauses from the vendor’s version and list them as “Gaps Found.”
Then generate the AI Terms Addendum.
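If you maintain customer-specific variants of this prompt, you can fill the bracketed placeholders programmatically before pasting the result into your contracted enterprise AI tool. Here is a minimal Python sketch; the template is an abbreviated copy of the Context section above, the helper name and sample values are illustrative, and you would extend the dictionary with the remaining fields from the full prompt.

```python
# Fill the bracketed placeholders in an AI-addendum prompt template.
# Abbreviated copy of the "Context" section; extend with the remaining
# fields (regions, data types, model providers) from the full prompt.
PROMPT_TEMPLATE = """Customer name: [CUSTOMER]
Vendor name: [VENDOR]
Product/service: [PRODUCT]
Covered use case: [USE CASE]
Risk tolerance: [RISK]"""

def fill_prompt(template: str, values: dict[str, str]) -> str:
    """Replace each [KEY] placeholder with its value, leaving unknown keys intact."""
    for key, value in values.items():
        template = template.replace(f"[{key}]", value)
    return template

context = {
    "CUSTOMER": "Acme Corp",          # illustrative values only
    "VENDOR": "ExampleSaaS Inc.",
    "PRODUCT": "Support Copilot",
    "USE CASE": "customer support chatbot",
    "RISK": "Low",
}

filled = fill_prompt(PROMPT_TEMPLATE, context)
print(filled)
```

Because unknown keys are left intact, any placeholder you forget to fill stays visibly bracketed in the output, which makes an incomplete prompt easy to spot before submission.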
Final takeaways
AI won’t replace human legal judgment, but this prompt keeps you focused on the terms that actually matter to your customers and legal team. You can use this prompt and others like it to:
- Ensure that when you build from scratch you’re using the right ingredients for the job
- Get started faster to unblock other teams and accelerate your business
- Prioritize the most logic-driven work that truly demands your attention
Building prompts of your own? Share them with our community in our #ai-labs channel, a space for privacy professionals to share their wins and challenges applying AI to their work.