I'm an ER doc with 8,400 saved articles. Here's the iPhone setup that finally works.
How clinicians use an iPhone-first second brain to wrangle CME PDFs, patient anecdotes, and reference snippets — without violating HIPAA or losing the dictation you took at 3am.
Every clinician I know has the same problem. A residency-era folder of PDFs on Dropbox. Eight thousand "save for later" tabs in Safari. A Notes app with 200 snippets that all start "pt c/o..." and end three words later because the trauma bay paged. Maybe a Zotero library nobody opens. And the one thing that actually matters — that one diagnostic pearl you heard from an attending in 2022 — is gone.
The CME-overload problem is actually a capture problem
The American Board of Internal Medicine wants 100 MOC points every 5 years. Most specialties land somewhere between 50 and 250 CME credits a year. That's a podcast a week, an article a day, a journal club every other Friday, plus whatever your hospital's grand rounds throws at you. Nobody — *nobody* — reads it all in the moment.
So you save it. To Pocket, to Instapaper, to a Notes folder, to "I'll email it to myself." And then comes the second failure: the patient case three months later where you swear you read something exactly relevant, and you can't find it.
The real bottleneck isn't reading speed. It's retrieval. You don't need another reader app. You need a second brain that knows what's in your screenshots, your voice memos, and your saved articles — and surfaces them when the patient walks in.
What "private" actually means for a doctor
Every "AI note-taking" app pitched to clinicians sends your text to a cloud model. That's a HIPAA problem if you're transcribing anything patient-specific, and it's a liability problem even if you're transcribing your own thoughts about patients. Most physicians I talk to draw the line at "I don't trust OpenAI with my dictation."
Apple's Foundation Models framework — shipping in iOS 26 — runs a 3-billion-parameter LLM on-device. No network call. No vendor agreement. No "your data may be used to improve our models." That's the only version of AI note-taking that's actually defensible in a medical context.
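If you're curious what that looks like in code, here's a minimal sketch of an on-device summarization call, assuming the `LanguageModelSession` and `SystemLanguageModel` API Apple documents for the Foundation Models framework in iOS 26 — illustrative, not Némos's actual pipeline:

```swift
import FoundationModels

enum CaptureError: Error { case modelUnavailable }

// Summarize a saved article entirely on-device. No network call:
// the ~3B-parameter system model runs on the Neural Engine.
func summarize(article: String) async throws -> String {
    // The model requires Apple Intelligence to be enabled on a supported device.
    guard case .available = SystemLanguageModel.default.availability else {
        throw CaptureError.modelUnavailable
    }
    let session = LanguageModelSession(
        instructions: "Summarize clinical articles into five concise bullet points."
    )
    let response = try await session.respond(to: article)
    return response.content
}
```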
The 4-app stack most doctors are running (and the problem with each)
- Apple Notes — fine for grocery lists, terrible for retrieval. Search doesn't reliably reach inside attached PDFs, tags are a bolted-on afterthought, and there are no connections between notes.
- Notion — slow on cellular in a basement OR, requires login, and the free tier has been gutted. Wrong tool for capture.
- Drafts — beautiful one-tap voice-to-text, but it's a launcher, not a library. Things go in. They don't come back out.
- Otter.ai — sends everything to a server in Tennessee. Read the Némos vs Otter comparison before you transcribe a single patient anecdote.
The pattern: capture is solved (Voice Memos, screenshot, "share to..."), but the second brain layer underneath is missing.
The actual workflow: how an attending uses Némos in a shift
Here's a real-ish day, slightly fictionalized:
6:42 AM, Apple Watch. You're on the bus. The hospitalist podcast drops something about a new vasopressor protocol. You raise your wrist, hold the side button, and dictate: "double-check norepi titration ceiling 2 mcg/kg/min vs the new sepsis pathway." Six seconds. Does Némos work on Apple Watch? — yes, the watchOS companion captures audio offline.
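For the curious, the capture primitive here is nothing exotic. A sketch of offline audio recording on watchOS with `AVAudioRecorder` — illustrative, not Némos's implementation, and it assumes you handle microphone permission and file transfer to the phone separately:

```swift
import AVFoundation

// Offline wrist capture: record straight to a local file. No network needed;
// transcription happens later, on-device, after the file reaches the phone.
final class WristCapture {
    private var recorder: AVAudioRecorder?

    func start() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.record, mode: .spokenAudio)
        try session.setActive(true)

        let url = FileManager.default
            .urls(for: .documentDirectory, in: .userDomainMask)[0]
            .appendingPathComponent("capture-\(Date().timeIntervalSince1970).m4a")

        // Speech-quality mono AAC keeps a six-second memo to a few kilobytes.
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: 16_000,
            AVNumberOfChannelsKey: 1
        ]
        recorder = try AVAudioRecorder(url: url, settings: settings)
        recorder?.record()
    }

    func stop() { recorder?.stop() }
}
```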
9:15 AM, between patients. You screenshot an UpToDate diff-dx tree on a rare presentation. Némos OCRs the screenshot on-device, extracts the keywords, and files it under "Endocrine" automatically — the on-device AI read the text without it ever leaving the phone.
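The on-device OCR step is standard Vision framework territory. A minimal sketch using `VNRecognizeTextRequest` — the keyword extraction and filing logic would sit on top of this:

```swift
import Vision
import UIKit

// On-device OCR of a screenshot. The image is processed locally by Vision;
// nothing is uploaded.
func recognizeText(in screenshot: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = screenshot.cgImage else { return completion([]) }

    let request = VNRecognizeTextRequest { request, _ in
        let lines = (request.results as? [VNRecognizedTextObservation])?
            .compactMap { $0.topCandidates(1).first?.string } ?? []
        completion(lines)
    }
    request.recognitionLevel = .accurate  // reference material: favor accuracy over speed

    let handler = VNImageRequestHandler(cgImage: cgImage)
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```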
1:30 PM, lunch. You're scrolling a New England Journal email. Article on perioperative beta-blockade. One swipe — share to Némos. The full text gets summarized by Foundation Models into 5 bullets, plus a citation block ready to paste into your CME tracker.
11:50 PM, post-call. You type a single sentence into the FAB (the floating action button): "interesting case — 34F atypical chest pain, troponin 0.04, normal coronaries, possible MINOCA." Némos tags it, links it to your previous MINOCA notes from 2024, and reminds you in 6 weeks to check the follow-up.
This is not a hypothetical. This is the second brain pattern that knowledge workers in finance and law have been using for years. Doctors just got it last.
The retrieval moment is the whole game
Three weeks later, that 34F walks back in. You open Némos, type "MINOCA young female," and the original case note, the angio-report screenshot you attached later, AND the NEJM piece you saved about platelet activation in MINOCA all surface in one view. You take 12 seconds to refresh, instead of 25 minutes to remember whether you saved it to Drafts or Pocket.
The retrieval moment is the entire ROI. Every other "knowledge management" tool — Notion, Roam, even Obsidian — optimizes for the *capture* moment. Doctors don't have a capture problem. We have a retrieval problem. The difference matters.
What about references and citations?
Némos isn't a citation manager — Zotero, Paperpile, and EndNote each do that job better. But Némos does something they don't: it remembers why you saved the article. A voice memo, a screenshot of the abstract, your scribbled note "use this for the chronic pain talk" — they all stay attached to the source PDF.
When you do sit down to write a CME submission or a grand-rounds talk, you don't open Zotero and stare at 4,000 entries. You open Némos and ask, in plain English, "what did I save about chronic post-surgical pain last spring?" The on-device LLM searches your own text, your own voice transcripts, your own screenshots. It doesn't search the web. It searches *you*.
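One plausible way to build that kind of plain-English retrieval with nothing but Apple's on-device frameworks is sentence embeddings from the NaturalLanguage framework. A rough sketch — `notes` is a stand-in for your transcripts and snippets, and real ranking would be more sophisticated:

```swift
import NaturalLanguage

// Rough semantic search over your own captures, entirely on-device.
func search(_ query: String, in notes: [String]) -> [String] {
    guard let embedding = NLEmbedding.sentenceEmbedding(for: .english) else { return [] }
    return notes
        .map { ($0, embedding.distance(between: query, and: $0)) }
        .sorted { $0.1 < $1.1 }   // smaller distance = semantically closer
        .prefix(5)
        .map { $0.0 }
}
```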
Two privacy red flags to watch for
Before you commit to any clinical capture tool, audit it for these:
- Does it send screenshots to a server for OCR? If yes, walk away. Even "anonymized" image data has been re-identified at major hospitals (Mayo published a piece on this in 2024). Look for "on-device OCR" specifically.
- Does it store voice memos in plain text on the vendor's cloud? Most "AI scribes" do. The right answer is "encrypted on-device, sync via CloudKit if you opt in" — a minimal sketch of that pattern follows this list. Is Némos private? covers what we do and don't see.
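Here's the sketch promised above: writing a transcript to the user's private CloudKit database with `encryptedValues`, so the field is end-to-end encrypted and covered by Advanced Data Protection when the user turns it on. The record type and field name are illustrative, not Némos's actual schema:

```swift
import CloudKit

// Opt-in sync: the transcript field is stored encrypted, and the record
// lives in the user's *private* database, not a shared one.
func syncTranscript(_ text: String) async throws {
    let record = CKRecord(recordType: "Capture")
    record.encryptedValues["transcript"] = text

    let database = CKContainer.default().privateCloudDatabase
    _ = try await database.save(record)
}
```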
What I'd actually recommend tomorrow
- Stop fighting Apple Notes. Move your CME and conference snippets to a real second-brain tool.
- Pick on-device AI, not cloud AI. On-device vs cloud AI explains why this matters more for clinicians than for any other profession.
- Capture voice memos on watch, search them on phone. The watchOS companion is the cheat code most doctors miss. Best voice recording app for iPhone covers the alternatives.
- Audit your existing stack quarterly. If a patient case three months ago is in 4 different apps, the system isn't working. Best screenshot app for iPhone is where most clinicians start.
Medicine is the original "knowledge work" profession. We have 2,400 years of practice making careful notes and losing them. The iPhone is the first capture device good enough to actually fix this — but only if the second brain layer underneath respects what we're putting into it.
The HIPAA-shaped problem that no clinical app talks about
Let's be honest about what HIPAA actually means for a personal capture layer. HIPAA does not apply to your private musings about a patient if those musings stay on your phone, in your head, or in your personal notebook. It applies the moment you share Protected Health Information with a third party — and a cloud AI vendor is absolutely a third party. Most physicians I talk to have never actually read 45 CFR § 164.502, but they have the right instinct: "if I dictate a voice memo about Mrs. Patel's atypical presentation into Otter, I just sent identifiable health information to a server in Tennessee."
A defensible personal capture workflow looks like this. First, you write in non-identifiers. "34F atypical chest pain" is not PHI. "Mira Patel, MRN 884401, atypical chest pain" is. The on-device transcription should be smart enough to flag potential identifiers and prompt you to genericize them before the note is saved. Second, your sync layer should be opt-in and end-to-end encrypted. Apple CloudKit with Advanced Data Protection turned on is the only consumer-grade sync that meets this bar in 2026. Third, every export should require an explicit confirmation — no auto-backups to Google Drive, no "import from Photos" that silently uploads. Is Némos private? walks through the architecture in more detail.
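The identifier-flagging step can be built entirely on-device with `NLTagger`. A sketch — this catches personal names only; MRNs, dates, and addresses need their own pattern checks, so treat it as a heuristic, not a complete PHI detector:

```swift
import NaturalLanguage

// Flag potential personal names in a transcript before it is saved,
// using the on-device NLTagger.
func potentialIdentifiers(in transcript: String) -> [String] {
    let tagger = NLTagger(tagSchemes: [.nameType])
    tagger.string = transcript

    var hits: [String] = []
    tagger.enumerateTags(in: transcript.startIndex..<transcript.endIndex,
                         unit: .word,
                         scheme: .nameType,
                         options: [.omitWhitespace, .omitPunctuation, .joinNames]) { tag, range in
        if tag == .personalName {
            hits.append(String(transcript[range]))  // e.g. "Mira Patel" -> prompt to genericize
        }
        return true
    }
    return hits
}
```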
The good news is that on-device AI removes 80% of the HIPAA exposure surface in one move. The transcription happens on the Neural Engine. The OCR happens on the Neural Engine. The classification happens on the Neural Engine. The model never sees the cloud. Even if the device is lost or stolen, iOS Data Protection (FileVault's iPhone equivalent) keeps the captures inaccessible without your passcode or Face ID. This is materially safer than the standard "Apple Notes synced to iCloud with the default settings" baseline most clinicians are already using.
Residents vs attendings: the workflows are completely different
I've watched a third-year resident and a 12-year attending use the same capture app for a week each. They use it for completely different things, and a working tool has to handle both.
Residents capture in fragments of fear. The 11:47pm page from the on-call intern. The "wait, what's the dose of dexmedetomidine in a 4-year-old again" moment. The 5am "I think I missed something on that EKG" replay. Their captures are short, dense, and almost always followed by a frantic search for a reference. The retrieval window is hours, not months. The right tool for a resident is a fast-capture layer that surfaces personal-reference snippets within seconds — "show me what I saved last week about pediatric sedation dosing" — without forcing them to organize anything in the moment.
Attendings capture in patterns. The case they want to use for grand rounds next quarter. The journal article they read on the train that's going to inform a guideline change. The teaching pearl they want to remember to share with the residents at next Thursday's morning report. Their captures are longer, more reflective, and the retrieval window is weeks to years. The right tool for an attending is a synthesis layer that lets them search across hundreds of captures by theme — "what have I saved about acute mesenteric ischemia in the last 18 months" — and pull a coherent teaching narrative out.
Némos handles both because the underlying primitives — voice memo, screenshot, quick note — are the same. The on-device AI does the work of classifying them into the right retrieval pattern. A resident's "double-check fentanyl drip rate" surfaces in seconds. An attending's "the Lancet piece on functional MRI in vegetative states" surfaces six months later when they're prepping the talk. Same app, different timeframes, no friction either way.
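One way that kind of classification could work on-device is guided generation, assuming the `@Generable` API Apple describes for the Foundation Models framework — the categories here are illustrative, not Némos's actual taxonomy:

```swift
import FoundationModels

@Generable
enum RetrievalPattern {
    case immediate      // resident-style: needed within hours
    case longitudinal   // attending-style: needed weeks to years out
}

// Ask the on-device model to classify a capture into a typed enum,
// so the result is guaranteed to be one of the known cases.
func classify(_ capture: String) async throws -> RetrievalPattern {
    let session = LanguageModelSession(
        instructions: "Classify clinical captures by how soon they will be needed again."
    )
    let response = try await session.respond(to: capture, generating: RetrievalPattern.self)
    return response.content
}
```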
Drug-interaction reference capture: the workflow that pays for itself
Every clinician has a personal mental library of drug interactions they've actually encountered. Not the giant Lexicomp database — your personal subset of "the warfarin + Bactrim case I missed in 2022," "the SSRI + tramadol serotonin syndrome we caught last summer," "the QT prolongation cluster on the ICU rotation." That library is what makes you faster than a green resident with a Lexicomp app open in front of them.
The capture pattern: every time you encounter a drug interaction in clinical practice, take 45 seconds to dictate it. Patient demographics (generic — "elderly female on multiple meds"), the interaction observed, the mechanism if you understand it, who caught it (you, pharmacy, or the patient self-reporting), and what the resolution was. Save it. The on-device AI tags it with the drugs involved and the interaction type. Three years later you have a personal interaction library of maybe 200 entries — your own internal heuristic for which combos to worry about most. Search "QT" and they all come up. Search "elderly polypharmacy" and the relevant subset comes up. This is the kind of knowledge work that distinguishes a 10-year clinician from a 2-year clinician, and most of it is currently lost because nobody has a capture layer good enough.
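For concreteness, here's roughly the shape such a capture could take, plus the trivially simple search that makes the library useful. Field names are assumptions for this sketch, not Némos's schema:

```swift
import Foundation

// One interaction capture, three years of which become a personal library.
struct InteractionCapture: Codable {
    let date: Date
    let drugs: [String]          // e.g. ["warfarin", "trimethoprim-sulfamethoxazole"]
    let interactionType: String  // e.g. "serotonin syndrome", "QT prolongation"
    let caughtBy: String         // "me" / "pharmacy" / "patient self-report"
    let resolution: String
    let transcript: String       // the original 45-second dictation
}

// "Search 'QT' and they all come up."
func matches(_ query: String, in library: [InteractionCapture]) -> [InteractionCapture] {
    library.filter { c in
        c.drugs.contains { $0.localizedCaseInsensitiveContains(query) }
            || c.interactionType.localizedCaseInsensitiveContains(query)
            || c.transcript.localizedCaseInsensitiveContains(query)
    }
}
```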
Journal club tracking the way it should work
Most academic medical centers run journal club weekly or biweekly. The pattern is the same everywhere: someone picks a paper, the group reads it, you discuss it for an hour, and three weeks later nobody remembers the conclusions. The institutional memory of journal club is essentially zero.
A working personal system: every journal club, you take three captures. (1) Screenshot of the abstract and the methods figure. (2) Voice memo of the *discussion* — not the paper itself, but what the smart attending at the table said about its limitations. (3) A typed one-liner: "would I change practice based on this? Why or why not?" Tag with the topic. Done. By the end of a residency, you have 150 journal club captures, all searchable, all surfaceable when a patient case reminds you of one. The voice memo is the real value — it's not the paper, it's the *interpretation* of the paper by people you trust. That's the part that disappears from PubMed.
CME conference learning capture (the actual ROI argument)
You're at the SCCM conference in San Diego. You attend 12 sessions across three days. By the time you fly home you've forgotten 60% of it. By the time you write the trip report for your division chief you've forgotten 80%. By the time you actually need the knowledge — six months later when a similar case shows up — you remember almost nothing.
The Némos pattern at a conference: after every session, take two minutes to dictate a voice memo. Speaker, topic, the three most important points, your reaction, the one slide you screenshotted. Tag with the conference name. That's it. Two minutes per session, 12 sessions, 24 minutes of total capture across three days. When you get home, the on-device AI summarizes all 12 voice memos into a structured "what I learned at SCCM 2026" note that becomes the seed of your trip report. Six months later, when the relevant case shows up, you search the conference tag and the voice memo plays back with full context.
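A sketch of the roll-up step, again assuming the iOS 26 `LanguageModelSession` API — note that the small on-device model has a limited context window, so a real implementation would likely summarize session-by-session and then merge:

```swift
import FoundationModels

// Roll 12 session memos into one structured trip-report seed, on-device.
func tripReportSeed(from transcripts: [String]) async throws -> String {
    let session = LanguageModelSession(
        instructions: """
        You summarize conference notes. For each session, list the speaker, \
        topic, and three key points. End with an overall themes section.
        """
    )
    let combined = transcripts.enumerated()
        .map { "Session \($0.offset + 1): \($0.element)" }
        .joined(separator: "\n\n")
    return try await session.respond(to: combined).content
}
```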
A board-certified intensivist I talked to runs this loop and estimates it has materially changed his behavior in maybe 40 patient encounters a year that he can point to specifically. Forty cases a year where the relevant conference insight surfaced *because he captured it well*, not because his memory is unusually good.
Patient-shared anecdotes: the workflow with the trickiest privacy framing
This one needs careful handling. Patients sometimes say things in the room that you want to remember — not for clinical reasons but because they teach you about being a doctor. The grandmother who said "the worst part of dying isn't the dying, it's pretending I'm not for my daughter's sake." The 6-year-old who explained their seizure aura better than any neurologist you've worked with. These moments make you better at the job, and they're routinely lost.
The pattern: capture the insight, not the identifier. Voice memo immediately after the visit: "an elderly woman with metastatic disease told me today that her real fear was making her daughter watch the decline — this changes how I should be framing prognosis conversations." No name. No MRN. No specifics that could identify her. The insight stays with you. The HIPAA exposure is essentially nil because there's no PHI in the capture. The on-device transcription locks it down further — even if the device is compromised, the note is still anonymous. Over a career, this builds into a personal philosophy archive that no medical school course could replicate.
On-call quick-capture: the 3am loop
On-call is the highest-cognitive-load environment in medicine. You are tired, paged from sleep, expected to make decisions, and rarely able to break to "go take notes." A capture system that doesn't work at 3am doesn't work at all.
What works at 3am: voice, on watch, while walking to the patient room. Hold the side button, dictate while moving: "Bed 14, post-op day 2, new onset AFib with RVR, hemodynamically stable, called surgery for hold on heparin pending TEE." Release. The capture lands on the phone, transcribed by morning. At sign-out, you read it back. At handoff, you have a precise record of what happened and when. Three months later when the same patient bounces back, you have your own contemporaneous note about the original episode — not just the EHR's sanitized version.
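The "transcribed by morning" part can be done strictly on-device with the Speech framework. A sketch — `requiresOnDeviceRecognition` is the line that guarantees the audio never leaves the phone, even when a network is available:

```swift
import Speech

enum TranscriptionError: Error { case onDeviceUnsupported }

// Transcribe a saved capture strictly on-device.
func transcribe(fileAt url: URL) async throws -> String {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en_US")),
          recognizer.supportsOnDeviceRecognition else {
        throw TranscriptionError.onDeviceUnsupported
    }

    let request = SFSpeechURLRecognitionRequest(url: url)
    request.requiresOnDeviceRecognition = true

    var finished = false
    return try await withCheckedThrowingContinuation { continuation in
        recognizer.recognitionTask(with: request) { result, error in
            guard !finished else { return }  // the handler fires for partial results too
            if let error {
                finished = true
                continuation.resume(throwing: error)
            } else if let result, result.isFinal {
                finished = true
                continuation.resume(returning: result.bestTranscription.formattedString)
            }
        }
    }
}
```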
A senior hospitalist I spoke to runs this loop for every overnight admit. Says it has cut her sign-out time in half and reduced "wait, when did that happen?" moments to near zero. The investment is 20 seconds per capture. The return is the rest of the call shift being less chaotic.
Hospital wifi vs LTE capture: the offline-first detail
Most hospitals have terrible wifi in patient-care areas. The basement OR, the back hallway of the ICU, the stairwell where you actually have time to think — all dead zones. Any capture tool that requires a network round-trip is dead at exactly the moments you need it most. Does Némos work offline? is the architecture answer: everything works offline, sync happens when you're back on wifi, and the on-device AI doesn't care either way because it's running on the Neural Engine, not in the cloud.
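The offline-first pattern itself is simple: always write locally, and let a path monitor flush the sync queue when connectivity returns. A sketch with `NWPathMonitor` — `flushQueue()` is a hypothetical hook into the local store:

```swift
import Network

// Offline-first: captures always write locally; sync is opportunistic.
final class SyncScheduler {
    private let monitor = NWPathMonitor()

    func start() {
        monitor.pathUpdateHandler = { path in
            // Capture never waits on this check — only sync does.
            if path.status == .satisfied {
                Task { await self.flushQueue() }
            }
        }
        monitor.start(queue: DispatchQueue(label: "sync.monitor"))
    }

    private func flushQueue() async {
        // Push queued captures to CloudKit (see the encryptedValues sketch above).
    }
}
```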
This sounds like a minor feature. It is actually the entire game. A cloud-only capture tool fails in the basement OR. A second brain that works offline becomes your default. Network independence is what makes the tool always present rather than occasionally useful.
Specialty differences: ER vs primary care vs surgery vs psych
The capture workflow varies materially by specialty, and a working tool has to flex.
Emergency medicine is fragment capture under pressure. Every shift produces 40-60 discrete encounters, most resolved in under an hour, some requiring follow-up. The ER doc's capture pattern is short voice memos at the end of shift — "interesting case tonight, 22-year-old with..." — and then nothing more until the next shift. The retrieval window is "did I see something like this before?" and the AI has to surface relevant prior captures fast.
Primary care is longitudinal pattern capture. Same patients over years. The capture pattern is "Mr. Lee's BP has been creeping despite the addition of amlodipine, consider lifestyle factors at next visit" — short, future-tense, attached to a person. The retrieval window is "what did I notice about Mr. Lee last quarter?" and the AI has to surface a coherent narrative across multiple captures.
Surgery is technique capture. The pattern of a surgical second brain is "today's case — encountered an aberrant right hepatic artery, used [technique] to manage, took 15 extra minutes but felt safer." Captures are tied to procedures, not patients. The retrieval window is "have I seen this anatomical variant before?" and the AI has to surface technique-level notes from across the years.
Psychiatry is insight capture with the strictest privacy bar. Every capture involves what is essentially the most sensitive PHI in medicine. The pattern is "today's session illuminated something general about how trauma manifests in [age group]" — abstracted to the level of principle, never patient-specific. On-device AI is non-negotiable for this specialty; cloud AI is a hard "do not use."
Némos handles all four because the underlying primitives are the same. The on-device intelligence does the work of making them feel like different apps.
What about Granola, Otter, and the new wave of "AI scribes"?
The "AI scribe" category — Abridge, DAX, Suki, and the dozens of clones — is solving a real problem: clinical documentation burden. They're good at what they do. They are also not personal second brain tools. They are clinical documentation tools, integrated with the EHR, sold to hospitals on a per-physician license. Your private thinking, your reactions, your conference notes, your journal club captures — none of that belongs in an AI scribe. The scribe is for the patient encounter note that ends up in Epic. The second brain is for *you*.
If you're a telehealth provider using Granola for meeting transcription with patients, that's a separate compliance conversation entirely. The pattern for telehealth is: scribe app for the encounter (with a proper BAA in place), Némos for your private reflections about your own practice, never the two crossed.
Where to go next
If you want the longer landscape, the top 10 second brain apps for 2026 roundup is the most-cited piece on this site. If you're a privacy-first clinician specifically, jump straight to private AI note-taking on device.
Or skip the reading: Némos is free on TestFlight today. Join the waitlist on the homepage and we'll send you an invite the moment a slot opens.
Other guides for your role
- Lawyers: Why 73% of associate attorneys we surveyed use Apple Notes wrong (and what to do about it)
- Designers: I took 14,000 inspiration screenshots in 2 years. Here's the iPhone system that made them findable.
- Developers: I'm a senior dev. I lost 400 voice ideas to AirPods this year. Here's the iPhone fix.
- Parents: Two kids, one iPhone, zero brain cells. The capture system that saved my year.
- Podcasters: I prep 4 podcast episodes a week. Here's the iPhone-only system that replaced 3 apps.