Supported by
Y Combinator

The Problem

Traditional hearing aids amplify noise

They rely on sound arriving from a single direction for noise reduction, but in real environments both conversation and noise come from many directions.

Not all “machine learning” is the same

Machine learning for hearing aids today works by enhancing speech and suppressing non-speech, but this breaks down in common situations like a noisy cafe, where the background noise is itself speech.

Technology for the real world

AudioFocus reimagined the hearing aid so you can follow conversations wherever you go. Our proprietary machine learning algorithm amplifies only nearby voices by analyzing echo statistics, much as our brains do.

Use ML to automatically detect which voices are important when many are present.
Design a microphone array that maximizes sound clarity and noise reduction.
Use low-power AI processors that allow a discreet, ergonomic design.
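One way to build intuition for the "echo statistics" idea above: sound from a nearby talker reaches the ear mostly along the direct path, while a distant talker's sound arrives mostly as room reverberation. A standard way to quantify this is the direct-to-reverberant ratio (DRR). The sketch below is purely illustrative and is not AudioFocus's actual algorithm; the impulse responses, scales, and the 5 ms direct-path window are made-up assumptions.

```python
import numpy as np

def direct_to_reverberant_ratio(ir, fs=16000, direct_ms=5.0):
    """Direct-to-reverberant ratio (dB) of a room impulse response.

    Energy within `direct_ms` after the main peak is treated as the
    direct path; everything later is treated as reverberation.
    A higher DRR suggests a closer sound source.
    """
    peak = int(np.argmax(np.abs(ir)))
    split = peak + int(fs * direct_ms / 1000)
    direct = np.sum(ir[:split] ** 2)
    reverb = np.sum(ir[split:] ** 2) + 1e-12  # avoid divide-by-zero
    return 10 * np.log10(direct / reverb)

# Toy impulse responses (hypothetical, for illustration only):
fs = 16000
t = np.arange(fs // 2) / fs
rng = np.random.default_rng(0)
tail = rng.standard_normal(t.size) * np.exp(-t / 0.3)  # decaying reverb tail

near = tail * 0.02
near[0] = 1.0   # strong direct path: talker is close
far = tail * 0.05
far[0] = 0.3    # weak direct path relative to reverb: talker is distant

print(direct_to_reverberant_ratio(near, fs))  # higher DRR: nearby voice
print(direct_to_reverberant_ratio(far, fs))   # lower DRR: distant voice
```

A device that estimates a cue like this per voice could then attenuate low-DRR (distant) talkers while preserving high-DRR (nearby) ones, which is the behavior the bullet points above describe.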

Expert Opinions

Clearly the noise reduction is strong and focus is strong which is exactly what people always talk about wanting.

Nicholas Reed

Audiologist & Professor @ Johns Hopkins

This is really good. Yes, I would definitely use it. I'd be interested in learning more. Thanks for sharing this.

Craig Saltiel


Our approach

Every aspect of AudioFocus is designed to serve unmet needs around hearing loss.

Care First

Clinically, a person might have ‘normal’ hearing yet still struggle to hear in background noise. That person deserves care too.


Our aim is to make this technology cost-effective, thanks to the recent passage of the Over-the-Counter Hearing Aid Act.


Our data augmentation technology allows us to help people hear in a variety of situations without harvesting data from them.


Join our waitlist and receive a 10% discount.

Subscribe for updates