AI Training Opt-Out Guide

Updated: March 7, 2026
Scope: Practical opt-out steps for the main AI providers people actually use, plus the gotchas that matter for SMBs, nonprofits, consultants, and developers.


The one-minute rule

Before you paste anything sensitive into an AI tool, answer these three questions:

  1. Am I on a consumer/personal plan or a business/work plan?
    Consumer plans often require an explicit opt-out. Business/API plans are often already excluded from training by default.
  2. Did I turn off any data-sharing / model-improvement toggle?
    If not, assume your future chats may be eligible to help improve models.
  3. Am I using feedback buttons, bug reports, connectors, or cloud coding agents?
    Even when training is off, those paths can still create retention or extra sharing.

Fastest safe defaults

If you are an individual using personal accounts

  • Turn off the provider’s model improvement / training / activity toggle.
  • Use temporary/private chat modes when available.
  • Delete old conversations that were created before you opted out.
  • Do not use thumbs-up / thumbs-down / “send feedback” on sensitive chats.

If you are a business

  • Move staff off personal accounts and onto business/team/workspace/API products.
  • For most major vendors, business/API products are excluded from training by default.
  • Still review retention, logging, connectors, and admin controls.

Provider-by-provider guide

1) OpenAI / ChatGPT / API / Codex

Consumer ChatGPT (Free / Go / Plus / Pro in a personal workspace)

Default: OpenAI may use content from consumer services to improve models unless you opt out.

How to opt out

On ChatGPT:

  1. Click your profile icon.
  2. Go to Settings.
  3. Open Data Controls.
  4. Turn off “Improve the model for everyone.”

OpenAI says once you do this, new conversations will not be used to train models.

Extra safe mode

Use Temporary Chat when the conversation is especially sensitive.

  • Temporary Chats are not used to train models.
  • They are deleted after 30 days.
  • They do not appear in history and do not create memories.

Business / Enterprise / API / Codex-style business use

For ChatGPT Business, ChatGPT Enterprise, and the API, OpenAI says inputs and outputs are not used for training by default. API customers can explicitly opt in to share data, for example through certain feedback paths.

Important gotchas

  • Opt-out applies to future conversations; it does not retroactively erase or exclude data that was already processed.
  • Voice has a separate control path in some OpenAI products.
  • A business/API no-training default is not the same thing as “no retention, no logs, no admins, no connectors.”
  • If you submit explicit feedback or opt into data sharing, that can change the picture.

2) Anthropic / Claude / Claude Code / API

Claude consumer plans (Free / Pro / Max)

Default: Anthropic gives consumer users a Help Improve Claude setting. If it is on, data from those accounts can be used to improve future models.

How to opt out

On desktop/browser:

  1. Click your name / account menu.
  2. Go to Settings.
  3. Open Privacy.
  4. Under Help Improve Claude, turn the toggle off.

On mobile:

  1. Open your account menu.
  2. Go to Settings.
  3. Open Privacy.
  4. Turn Help Improve Claude off.

Claude Code on consumer accounts

This is the part many people miss:

  • If Claude Code is being used from a consumer Claude account, the same model-improvement setting applies.
  • If the setting is on, Anthropic says data from those coding sessions can be used to improve future Claude models.
  • If the setting is off, Anthropic says those sessions are not used for future model training.

Retention on consumer Claude Code

Anthropic says:

  • Training allowed: up to 5 years retention for model development and safety improvements.
  • Training off: 30-day retention period.

Team / Enterprise / API / commercial use

Anthropic says commercial offerings—including Team, Enterprise, API, and Claude Code used under commercial terms—are not used to train generative models by default, unless the customer explicitly opts in, such as through the Development Partner Program.

Extra gotchas for Claude Code

Even with training off:

  • Claude Code still sends prompts and outputs over the network to Anthropic when using Anthropic-hosted models.
  • The /bug command sends full conversation history and is retained for 5 years.
  • Telemetry and error reporting can be disabled with environment variables.
  • Non-essential traffic is disabled by default when using certain third-party backends like Vertex, Bedrock, or Foundry.
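As one concrete illustration of the environment-variable point above, a shell-profile fragment like the following is the usual way to switch these off. Treat the variable names as assumptions to verify against Anthropic's current Claude Code settings documentation, not a guaranteed interface:

```shell
# Sketch of a shell profile fragment for opting Claude Code out of optional
# telemetry and error reporting. Variable names reflect Anthropic's docs at
# the time of writing -- verify against current documentation before relying on them.
export DISABLE_TELEMETRY=1                          # usage/analytics telemetry
export DISABLE_ERROR_REPORTING=1                    # automatic error reporting
export DISABLE_BUG_COMMAND=1                        # disables the /bug reporting path
export CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1   # umbrella switch for non-essential traffic
```

Note that these variables govern telemetry, not model traffic: prompts and outputs still travel to whichever model backend you use.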

3) Google Gemini (consumer app)

Consumer Gemini app / gemini.google.com

Default: Google says Gemini Apps data may be used to improve Google’s services and train models. Keep Activity is generally on by default for adults.

How to opt out for future chats

On the Gemini web app:

  1. Open gemini.google.com.
  2. Open Menu.
  3. Go to Settings & help.
  4. Open Activity.
  5. Near the top, click Turn off.
  6. Choose Turn off or Turn off and delete activity.

You can also manage this from your Google Account’s My Activity controls.

What this actually changes

Google states:

  • Temporary chats and chats with Keep Activity off are retained with your account for up to 72 hours to provide the service and protect users.
  • Temporary chats are not used to train Google’s AI models.
  • If Keep Activity is off and you do not submit feedback, Google says it also does not use your future chats to improve its AI models.

Extra gotchas

  • Turning off activity only affects future chats, not data already collected.
  • Audio / Gemini Live improvement is a separate setting and is off by default.
  • Deleting activity is separate from turning activity off.
  • Personalization based on past Gemini chats is another separate setting.

4) Google Workspace with Gemini

Workspace-licensed Gemini

For eligible Google Workspace with Gemini users, Google says chats and uploaded files are not reviewed by humans and are not used to improve generative AI models outside the domain-level protections Google describes for Workspace.

Admin controls

Admins can:

  1. Go to Admin console.
  2. Open Generative AI > Gemini app.
  3. Open Gemini conversation history.
  4. Turn history on or off.
  5. If on, set auto-delete to 3, 18, or 36 months.

If history is off, Google says new chats are still saved for up to 72 hours so it can provide the service and process feedback.

Practical takeaway

For org use, Workspace Gemini is far safer than consumer Gemini.


5) Google Vertex AI / Gemini API / Gemini for Google Cloud

Vertex AI

Google says under its cloud terms it will not use your data to train or fine-tune models without your permission or instruction. That is one of the strongest default positions in the market.

Gemini for Google Cloud / Gemini Code Assist / Gemini CLI in Google Cloud context

Google’s docs say prompts and responses in Gemini for Google Cloud are not used to train Gemini models.

Gemini API / Google AI Studio nuance

For billing-enabled Gemini API projects, Google says prompts and responses in logs are not used for product improvement by default. However, owners can opt in to logging, datasets, and feedback sharing with Google; if you share datasets/logs, Google says they may be used to improve products and train future models.

Practical takeaway

  • Vertex AI / Google Cloud path: good for business-sensitive workloads.
  • AI Studio / Gemini API logs sharing: leave any sharing or dataset contribution features off unless you intend to contribute data.

6) Microsoft Copilot (consumer)

Consumer Copilot

Microsoft says it uses real-world consumer data to help train underlying generative AI models, but signed-in users can opt out.

How to opt out

On copilot.com:

  1. Click your profile icon.
  2. Click your profile name.
  3. Go to Privacy.
  4. Turn off Training on conversation activity.
  5. Turn off Training on voice conversations if you want voice excluded too.

On desktop app:

  • Settings > Privacy > Training on conversation activity / Training on voice conversations

On mobile:

  • Menu > Profile > Account > Privacy > Training on conversation activity / Training on voice conversations

Microsoft says opting out excludes your future conversation activities from training.

Extra nuance

Microsoft says you can opt out of model training and still keep personalization on.


7) Microsoft 365 Copilot Chat / Microsoft 365 Copilot / Azure OpenAI

Microsoft 365 Copilot Chat and Microsoft 365 Copilot

For work accounts under Microsoft 365 commercial terms, Microsoft says prompts and responses have enterprise data protection and are governed as customer data under Microsoft’s commercial terms.

Microsoft 365 Copilot uses Azure OpenAI, not OpenAI’s public consumer services.

Training default

Microsoft says Microsoft 365 Copilot customer data is not used to train foundation models.

Important nuance

  • Prompts and responses can still be logged in enterprise systems.
  • Web-grounded features can involve Bing Search.
  • Admins still need to manage permissions, retention, eDiscovery, agents, and extensions.

Azure OpenAI / Azure Direct Models

Microsoft states prompts, completions, embeddings, and training data are not available to OpenAI, are not used to improve OpenAI models, and are not used to train or improve Microsoft or third-party foundation models without permission.


8) GitHub Copilot

Individual users

GitHub says that by default, GitHub, its affiliates, and third parties do not use your prompts, suggestions, or code snippets for AI model training, and there is no setting that enables it.

That means the “training opt-out” issue is mostly already handled for GitHub Copilot.

What you can still control

GitHub says individual Pro users can still control:

  • whether prompts and suggestions are collected and retained,
  • whether inline suggestions that match public code are allowed.

These controls live in your personal settings on GitHub.com.

Business / Enterprise

Organizations can manage feature availability, policy controls, and which repositories/agents/features are enabled.

Important nuance

The main risk with Copilot is often scope, permissions, and retention, not model training.


9) Perplexity

Free / Pro / Max

Perplexity says AI Data Retention is enabled by default for Free, Pro, and Max users, but users can opt out.

How to opt out

  1. Go to Account settings.
  2. Open Preferences.
  3. Find the AI data retention toggle.
  4. Toggle it off.

Perplexity notes:

  • opt-outs apply only to future data,
  • previously collected training data cannot be removed from prior training pipelines.

Enterprise

Perplexity says enterprise data is never used for AI training.


10) Mistral / Le Chat

Free / Pro / Student

Mistral says users on these plans may opt out of training.

Web/admin console opt-out

  1. Open the Admin Console.
  2. Select Le Chat under Manage.
  3. Under Privacy, disable “Allow your interactions to be used to train our models.”

Mobile opt-out

  1. Open Settings.
  2. Open Data & Account Controls.
  3. Deselect Enable data sharing.

Mistral says once you opt out, your input and output data are no longer used for training. It also warns that documents uploaded to Le Chat count as input data.

Teams / Enterprise

Mistral says Le Chat Teams and Le Chat Enterprise are opted out of training by default.


11) xAI / Grok

Grok on grok.com or the Grok mobile app

xAI says signed-in users can choose whether their data is used for model training.

How to opt out on grok.com

  1. Go to Settings.
  2. Open Data.
  3. Turn off Improve the Model.

How to opt out in the Grok mobile app

  1. Open Settings.
  2. Open Data Controls.
  3. Turn off Improve the model.

xAI says once you turn it off, new conversations will not be used for training.

Private Chat

xAI also says Private Chat content and interactions are not included in model training.

Grok inside X

X says you can opt out through:

  1. Settings and privacy
  2. Privacy & Safety
  3. Data sharing and personalization
  4. Grok & Third-party Collaborators
  5. Turn off the training/fine-tuning data-sharing option

Important gotchas

  • If you voluntarily submit feedback, that feedback may still be used for training.
  • If you use Grok without logging in, xAI says you may not have an opt-out option in some regions.

What “opt out” usually does not do

Even after you opt out, it usually does not mean:

  • your old chats are erased from every internal system,
  • your prompts are never retained at all,
  • logs disappear immediately,
  • admins cannot see anything,
  • connectors stop pulling data,
  • feedback channels stop sharing data,
  • coding tools become “local only.”

In most products, opt-out mainly means:

Your future chats or coding sessions should not be used to train or improve future models, unless you later opt back in or explicitly provide feedback/share data.


Deleting old data after opt-out

If you want the strongest cleanup posture, do all of these:

  1. Turn off training / model improvement first.
  2. Delete sensitive conversations created before the opt-out date.
  3. Turn off chat history / keep activity / conversation history where available.
  4. Use temporary/private chat for highly sensitive work.
  5. Avoid thumbs-up / thumbs-down / bug-report tools on sensitive sessions.
  6. For orgs, move users to business/API products and disable personal account use.

Best practical setup by use case

Personal journaling / psychoanalysis / deeply private writing

Best to worst:

  1. Local-only model on your own machine
  2. Consumer AI with training toggle off + private/temporary mode
  3. Consumer AI with training toggle left on

SMB / nonprofit internal docs

Best to worst:

  1. Business/workspace/API tier under org control
  2. Consumer plan with every privacy toggle off
  3. Staff using random personal AI accounts

Source code / repos / CLI coding agents

Best to worst:

  1. Business/API/commercial coding setup with repo controls
  2. Consumer tool with training off but broad repo access
  3. Personal consumer account + unrestricted access + feedback/bug tools enabled

My blunt practical advice

If the content matters, do not rely on memory or assumptions about a vendor.

Use this rule:

  • Consumer plan: manually opt out
  • Business/API plan: verify that no-training is the default, then check retention and logs
  • Coding tools: treat repo access, telemetry, and bug-report commands as separate risk paths
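The rule above is simple enough to encode. Here is a hypothetical shell helper (the function name and plan labels are my own, not any vendor's interface) that spells out the same triage:

```shell
# Hypothetical triage helper encoding the rule above.
# Plan labels ("consumer", "business", "api", "coding") are illustrative.
triage() {
  case "$1" in
    consumer)      echo "manually opt out of training; prefer temporary/private chats" ;;
    business|api)  echo "verify no-training default, then check retention and logs" ;;
    coding)        echo "audit repo access, telemetry, and bug-report paths separately" ;;
    *)             echo "unknown plan type" ;;
  esac
}

triage consumer
```

Calling `triage business` or `triage api` returns the same verification advice, since both run under commercial terms.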

Quick reference table

Provider | Consumer default | Consumer opt-out path | Business/API default
OpenAI / ChatGPT | Training may occur unless opted out | Settings → Data Controls → Improve the model for everyone → Off | Business / Enterprise / API not used for training by default
Anthropic / Claude | Consumer setting controls training | Settings → Privacy → Help Improve Claude → Off | Team / Enterprise / API not used for training by default
Google Gemini app | Keep Activity can enable model-improvement use | Gemini → Settings & help → Activity → Turn off | Workspace Gemini protected; not used to improve models in the consumer sense
Microsoft Copilot consumer | Consumer data can help train unless opted out | Profile → Privacy → Training on conversation activity / voice → Off | Microsoft 365 Copilot and Azure OpenAI not used to train foundation models
GitHub Copilot | Not used for AI model training by default | No training opt-out needed; manage retention/settings on GitHub.com | Org admins control features/policies
Perplexity | AI data retention on by default for consumer plans | Account → Preferences → AI data retention → Off | Enterprise never used for training
Mistral Le Chat | Consumer plans may share unless opted out | Privacy / Data & Account Controls → disable sharing | Teams / Enterprise opted out by default
xAI Grok | Signed-in users can choose | Settings → Data / Data Controls → Improve the Model → Off | Check enterprise agreement; consumer-style controls still matter

Final caveat

This guide is “complete” for the mainstream providers and products most people actually use as of March 7, 2026. It is not a promise that every niche model host, wrapper, browser extension, or SaaS plugin follows the same rules. For any AI product layered on top of another provider, check:

  1. the app’s own privacy policy,
  2. whether it is using consumer or enterprise terms underneath,
  3. whether it adds its own logging, storage, or human review.