Can I search voice memos by what was said inside them?
Updated May 14, 2026
Searching voice memos by spoken content is one of iOS 18's quiet wins. Here's how it works and where it falls short.
Native iOS 18+ method:
- Open Voice Memos → tap the search bar at the top.
- Type any word you remember saying — "espresso machine", "meeting Tuesday", "Sarah".
- Voice Memos returns recordings where that word appears in the auto-generated transcript.
You can also search via Spotlight (swipe down from the home screen) — voice memo transcripts are indexed across the whole device.
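Under the hood, this is essentially a case-insensitive keyword lookup across the auto-generated transcripts. A minimal sketch of the idea (toy data and a hypothetical `search_memos` function, not Apple's implementation):

```python
def search_memos(memos, query):
    """Return titles of memos whose transcript contains the query (case-insensitive)."""
    q = query.lower()
    return [m["title"] for m in memos if q in m["transcript"].lower()]

memos = [
    {"title": "Kitchen ideas", "transcript": "We should look at that espresso machine again."},
    {"title": "Standup notes", "transcript": "Sarah will send the agenda for the meeting Tuesday."},
]

print(search_memos(memos, "espresso machine"))  # ['Kitchen ideas']
print(search_memos(memos, "sarah"))             # ['Standup notes']
```

The key consequence: a match requires the exact word (or phrase) to appear somewhere in the transcript, which is why the limitations below matter.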
The four limitations:
- Transcripts only exist for recordings made on iOS 18 or later. Older recordings must be replayed and transcribed manually, or run through a transcription app.
- Language support varies. English, Spanish, French, German, Italian, Japanese, Chinese (Simplified and Traditional), and Korean are well supported; other languages may not transcribe at all.
- Audio quality matters. Recordings with background noise, mumbling, or distant microphones produce noisy transcripts with many gaps. Search will miss those words.
- No semantic search. If you said "the place with the green door" but search for "restaurant", iOS won't connect them.
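The keyword/semantic gap is easy to see with a toy sketch. Semantic search compares meaning vectors (embeddings) instead of literal words, so "restaurant" can match "the place with the green door" despite zero shared words. The vectors below are hand-made for illustration only, not Apple's actual model:

```python
import math

# Hand-made 3-dimensional "embeddings", purely illustrative --
# real systems use learned vectors with hundreds of dimensions.
EMBEDDINGS = {
    "the place with the green door": [0.9, 0.1, 0.2],  # a dining spot
    "restaurant":                    [0.8, 0.2, 0.1],  # close in meaning
    "bicycle repair":                [0.1, 0.9, 0.3],  # unrelated
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

query, phrase = "restaurant", "the place with the green door"

# Keyword search: no shared words, so it misses.
keyword_hit = query in phrase

# Semantic search: the vectors point the same way, so it matches.
similarity = cosine(EMBEDDINGS[query], EMBEDDINGS[phrase])

print(keyword_hit)                # False
print(similarity > 0.9)           # True
```

This is the distinction Némos exploits below: keyword search over transcripts is exact-match only, while embedding comparison can connect different phrasings of the same idea.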
For better search across a large library:
Némos solves three of these limitations:
- Runs transcription on older recordings too (you can batch-import voice memos from Apple's app).
- Adds *semantic* search on iOS 26 devices using Apple's Foundation Models — search "where I talked about the espresso machine" and it finds the recording even if you didn't say that exact phrase.
- Lets you tag recordings (a 2-second action) so finding them is even faster.
The pro move: every Sunday, spend 5 minutes scrubbing through new voice memos. Add a one-word tag ("ideas", "meeting", "todo", "shopping"). In a year you'll have a fully searchable audio library that's faster than any text note system.