Noisy places hinder human connection

AudioFocus designs and manufactures hearing aid technology that lets people hear what matters most, even in noisy places.

We know the problem deeply

We’ve talked to countless patients and audiologists from all over the country about the challenge of noisy places, and three things are clear:

Noise comes from speech too [1]

Competing voices are often just as loud as the speech you want to hear [2]

Desired speech can come from many angles [3]

Translating cues into understanding

How we determine which sounds are important and which ones are not

Spectral cues tell us whether a sound is a voice. They can’t distinguish important voices from irrelevant ones.

Sound intensity cues tell us which voice is loudest. They fail when the noise is as loud as the desired speech.

Directional cues tell us which sound is in front. They fail when there are multiple speakers or when you turn your head.
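For intuition, here is a minimal sketch of how these three cues might be measured on one short stereo frame. The feature choices (voice-band energy ratio, frame level, interaural level difference) are illustrative stand-ins, not our production signal processing.

```python
import numpy as np

def cue_snapshot(left, right, fs):
    """Illustrative measurements of the three classic cues on one
    short stereo frame (left/right: numpy arrays of samples)."""
    mono = 0.5 * (left + right)

    # Spectral cue: how much energy sits in the typical voice band.
    spectrum = np.abs(np.fft.rfft(mono)) ** 2
    freqs = np.fft.rfftfreq(len(mono), d=1.0 / fs)
    voice_band = spectrum[(freqs > 100) & (freqs < 4000)].sum()
    voice_ratio = voice_band / (spectrum.sum() + 1e-12)

    # Intensity cue: overall loudness of the frame, in dB.
    level_db = 10.0 * np.log10(np.mean(mono ** 2) + 1e-12)

    # Directional cue: interaural level difference; near 0 dB
    # suggests the source is roughly in front of the listener.
    ild_db = 10.0 * np.log10((np.mean(left ** 2) + 1e-12)
                             / (np.mean(right ** 2) + 1e-12))
    return voice_ratio, level_db, ild_db
```

Each of these can be computed reliably, yet none of them alone separates the voice you care about from an equally loud voice nearby, which is exactly the gap described above.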

The distance cue

One cue that has been largely ignored in research is the distance cue, which relies on echo statistics [4]. The basic idea is to measure the level difference between the direct sound and its echoes: a nearby voice reaches the ear mostly as direct sound, while a distant one arrives mostly as reverberation. Our algorithm uses this cue to determine which voices are important and which ones are not.
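One common way to summarize this cue is the direct-to-reverberant energy ratio of a room impulse response. The sketch below is a rough illustration of that idea, assuming a measured or simulated impulse response; the 2.5 ms direct-path window is a conventional choice, not our production estimator.

```python
import numpy as np

def direct_to_reverberant_ratio(rir, fs, direct_window_ms=2.5):
    """Estimate the distance cue as the ratio (in dB) of direct-path
    energy to echo/reverberant energy in a room impulse response."""
    onset = np.argmax(np.abs(rir))                   # direct-path arrival
    split = onset + int(fs * direct_window_ms / 1000.0)
    direct_energy = np.sum(rir[:split] ** 2) + 1e-12
    reverb_energy = np.sum(rir[split:] ** 2) + 1e-12
    return 10.0 * np.log10(direct_energy / reverb_energy)
```

The closer a talker is, the more of their energy arrives in those first few milliseconds, so higher values point to nearby, and usually more important, voices.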

Why has no one ever taken advantage of this echo data?

Trick question. It doesn’t exist anywhere.

A dataset with endless diversity

To solve this, we created a ray-tracing auralization framework called Floyd. Floyd takes a 3D model of a room and patients’ head recordings and simulates millions of echoes.
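In principle, the last step of auralization is a convolution: once the ray tracer has turned the room geometry into an impulse response, that response is applied to dry speech to produce audio with all of the room’s echoes baked in. A minimal sketch, with illustrative names rather than Floyd’s actual API:

```python
import numpy as np
from scipy.signal import fftconvolve

def auralize(dry_speech, room_impulse_response):
    """Render 'wet' audio by convolving dry speech with a simulated
    room impulse response (the echo pattern a ray tracer produces)."""
    wet = fftconvolve(dry_speech, room_impulse_response)
    return wet[: len(dry_speech)]
```

Because the room model and the source positions are parameters, the same machinery can render the same sentence in an endless variety of acoustic settings.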

The perfect output to train on

For each room we generate two versions: one that is loud and noisy, and one that is not. Then we train our model to map the noisy version onto the quiet one.
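That is a standard supervised denoising setup. A minimal sketch, assuming paired noisy/quiet renderings of the same room; the tiny convolutional network and placeholder data are illustrative, not our actual architecture:

```python
import torch
import torch.nn as nn

# Placeholder paired data: in practice, both versions come from the
# same simulated room, one rendered with noise and one without.
clean = torch.randn(8, 1, 16000)                  # quiet version
noisy = clean + 0.1 * torch.randn_like(clean)     # noisy version
paired_loader = [(noisy, clean)]

# Any network mapping a noisy waveform to a clean estimate works here.
model = nn.Sequential(nn.Conv1d(1, 32, 9, padding=4), nn.ReLU(),
                      nn.Conv1d(32, 1, 9, padding=4))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

for noisy_batch, clean_batch in paired_loader:
    optimizer.zero_grad()
    loss = loss_fn(model(noisy_batch), clean_batch)
    loss.backward()
    optimizer.step()
```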

Indistinguishable from reality

We simulate tens of thousands of sound reflections per echo to make sure they’re indistinguishable from real-world echoes. By leveraging dozens of GPUs in the Amazon cloud, we can do this at scale.

Purpose-built hardware

We looked for existing hardware to support our breakthrough software, but every option forced limiting trade-offs in wearability, battery life, and more. So we decided to build it ourselves.

A unique microphone array

By using a novel microphone array design, we can capture echo signatures from multiple angles, maximizing the benefit for our patients.
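For intuition about why multiple microphones help, here is the textbook delay-and-sum beamformer: by delaying and summing the channels, the array emphasizes sound from one chosen direction. This is a well-known stand-in sketch, not our actual echo-signature processing, and the integer-sample np.roll delay is a simplification of a true fractional delay.

```python
import numpy as np

def delay_and_sum(channels, delays_samples):
    """Classic delay-and-sum beamforming: align each microphone
    channel for one direction of arrival, then average them so
    sound from that direction adds up coherently."""
    aligned = [np.roll(ch, -d) for ch, d in zip(channels, delays_samples)]
    return np.mean(aligned, axis=0)
```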

Custom, ultra-low-power ASIC

We’re partnering with custom ASIC designers to deliver this technology in a wearable form factor. These chips can run deep learning models at single-digit milliwatts.

[1] Pang, Jermy, et al. "Adults who report difficulty hearing speech in noise: An exploration of experiences, impacts and coping strategies." International Journal of Audiology 58.12 (2019): 851-860.
[2] Rusnock, Christina F., and Pamela McCauley Bush. "Case study: an evaluation of restaurant noise levels and contributing factors." Journal of Occupational and Environmental Hygiene 9.6 (2012): D108-D113.
[3] Archer-Boyd, Alan W., Jack A. Holman, and W. Owen Brimijoin. "The minimum monitoring signal-to-noise ratio for off-axis signals and its implications for directional hearing aids." Hearing Research 357 (2018): 64-72.
[4] Zahorik, Pavel, Douglas S. Brungart, and Adelbert W. Bronkhorst. "Auditory distance perception in humans: A summary of past and present research." Acta Acustica united with Acustica 91.3 (2005): 409-420.