
Does ChatGPT train on the notes I send it?

Updated May 14, 2026

Whether OpenAI trains on your content depends entirely on which product you're using and which settings are enabled. Here's the 2026 reality.

ChatGPT free and Plus (consumer)

  • Default: yes, your conversations are used for model training.
  • To opt out: Settings → Data Controls → "Improve the model for everyone" → off.
  • Effect of opt-out: your conversations are still stored for 30 days for abuse monitoring, then deleted. Not used for training.

ChatGPT Team and Enterprise

  • Default: no, content is never used for training.
  • Logging: conversations are stored for the user's history but excluded from training pipelines.

OpenAI API (the surface most apps use)

  • Default since March 2023: API requests are NOT used for training.
  • Retention: stored for 30 days for abuse monitoring, then deleted, UNLESS the developer has signed a "zero data retention" (ZDR) agreement with OpenAI, in which case requests are deleted immediately.
  • Most apps using the API haven't signed ZDR agreements; Mem, Reflect, and most others use the standard API with its 30-day retention. (Notion is an exception; see below.)

What this means for notes apps

  • Notion AI: powered by the OpenAI API. Notion's privacy policy states they have a zero-retention agreement, so OpenAI doesn't keep your content. But Notion themselves can still read it, because it passes through their servers without end-to-end encryption.
  • Mem: similar — API-based and opted out of training, but the content is readable on Mem's servers, again with no end-to-end encryption.
  • Apple Notes (with Apple Intelligence): Apple's on-device AI does not train on your content. Cloud-routed requests go through Apple's Private Cloud Compute, which Apple says doesn't store content beyond the request. No third-party AI provider involved.
  • Némos: uses Apple's on-device Foundation Models exclusively. No content ever leaves your device for AI processing. No OpenAI involvement.

What about non-OpenAI providers?

  • Anthropic (Claude): API not used for training by default. Consumer Claude.ai conversations not used for training by default since 2024.
  • Google (Gemini): free Gemini consumer plan trains by default; can opt out. Workspace AI doesn't train.
  • Microsoft Copilot: doesn't train on enterprise data; consumer terms vary.

The trust gap

Even when policies say "we don't train," you're trusting that:

  • The provider's policy is accurate.
  • The provider's policy doesn't change without notice.
  • The provider's *employees* don't access the data (the technical capability often exists).
  • The provider isn't compelled by law to hand it over.

For genuinely sensitive content (medical, legal, financial, journalistic sources), the only safe path in 2026 is on-device AI — Apple Foundation Models, local LLMs via Ollama or LM Studio, or apps explicitly designed around on-device processing.
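To make "local LLM" concrete, here is a minimal Python sketch against Ollama's local HTTP API. It assumes Ollama is installed, running on its default port (11434), and that a model has already been pulled (e.g. `ollama pull llama3`); the model name, helper names, and prompt are illustrative, not part of any app's actual code.

```python
"""Fully local note summarization via Ollama's HTTP API.

Assumes an Ollama server on localhost:11434 (the default) with a model
already pulled. Nothing here talks to a third-party provider.
"""
import json
import urllib.request

# Ollama's documented generate endpoint; loopback-only unless you
# explicitly bind the server to another interface.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str) -> dict:
    # stream=False asks for a single JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}


def summarize_locally(note: str, model: str = "llama3") -> str:
    payload = build_request(model, f"Summarize this note:\n\n{note}")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The privacy property comes from the endpoint, not the code: the request terminates at a process on your own machine, so there is no provider policy to trust.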

Quick reference card

Tool                 Training default   Opt-out?   Provider holds data?
ChatGPT free/Plus    Yes                Yes        Yes (encrypted at rest, accessible to OpenAI)
ChatGPT Team/Ent     No                 N/A        Yes
OpenAI API           No                 N/A        Yes (30 days)
Notion AI            No                 N/A        Yes (Notion holds it; OpenAI has ZDR)
Apple Intelligence   No                 N/A        No (on-device) or PCC (no storage)
Némos                No                 N/A        No (fully on-device)

If privacy is a deciding factor, the on-device options are the only ones that don't require trusting a third party.
