AudioFocus designs and manufactures hearing aid technology that lets people hear what matters most, even in noisy places.
We’ve talked to countless patients and audiologists from all over the country about the challenge of noisy places, and three things are clear:
How do we determine which sounds are important and which ones are not?
Trick question. It doesn’t exist anywhere.
To solve this, we created Floyd, a ray tracing auralization framework. Floyd takes a 3D model of a room and patients’ head recordings and simulates millions of echoes.
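The core idea of ray-traced auralization can be sketched in a few lines: the ray tracer produces a set of echo paths (a delay and an attenuation for each), those paths form a room impulse response, and convolving a dry recording with that response places it in the room. This is a minimal illustration, not Floyd's actual pipeline; all names and numbers here are hypothetical.

```python
# Minimal auralization sketch: build an impulse response from
# ray-traced echo paths, then convolve a dry signal with it.
# Hypothetical stand-in for a real ray tracing framework.
import numpy as np

SAMPLE_RATE = 16_000  # Hz

def impulse_response(reflections, length_s=0.5):
    """Build a room impulse response from (delay_s, attenuation) pairs,
    as a ray tracer might emit for each simulated echo path."""
    ir = np.zeros(int(length_s * SAMPLE_RATE))
    for delay_s, attenuation in reflections:
        idx = int(delay_s * SAMPLE_RATE)
        if idx < len(ir):
            ir[idx] += attenuation
    return ir

def auralize(dry_signal, ir):
    """Convolve a dry recording with the impulse response
    to hear it as if played inside the room."""
    return np.convolve(dry_signal, ir)

# Example: direct path plus two wall echoes (made-up values).
reflections = [(0.0, 1.0), (0.02, 0.5), (0.05, 0.25)]
ir = impulse_response(reflections)
dry = np.random.default_rng(0).standard_normal(SAMPLE_RATE)  # 1 s stand-in signal
wet = auralize(dry, ir)
```

A production system would trace far more paths per source and use per-ear (head-related) responses, but the delay-and-sum structure is the same.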
For each room we generate two versions: one loud and noisy, one quiet. Then we train our model to map the noisy version to the quiet one.
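The paired noisy-to-quiet setup can be illustrated with a toy example: given matched noisy and quiet renderings, fit a model that maps one to the other. A linear least-squares fit stands in for the real model here; the data and dimensions are invented for illustration.

```python
# Toy paired denoising: fit a map from noisy renderings back
# to their quiet counterparts. Linear least squares is a
# hypothetical stand-in for the actual trained model.
import numpy as np

rng = np.random.default_rng(42)

# "Quiet" renderings, and "noisy" ones made by adding babble-like noise.
quiet = rng.standard_normal((1000, 8))          # 1000 frames, 8 features each
noisy = quiet + 0.3 * rng.standard_normal(quiet.shape)

# Fit a linear map W that pulls noisy frames back toward quiet frames.
W, *_ = np.linalg.lstsq(noisy, quiet, rcond=None)
denoised = noisy @ W

# The fitted map should beat doing nothing at all.
err_before = np.mean((noisy - quiet) ** 2)
err_after = np.mean((denoised - quiet) ** 2)
```

The point of the paired construction is exactly this: because the quiet version of every noisy room exists by design, supervised training needs no hand-labeled real-world data.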
We simulate tens of thousands of sound reflections per echo so the results are indistinguishable from real-world echoes. Running across dozens of GPUs in the Amazon cloud lets us do this at scale.
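Because each room renders independently, the workload fans out naturally across many machines. Here is a hypothetical sketch of that fan-out, with a local thread pool standing in for a fleet of cloud GPU instances; the function names and counts are assumptions, not AudioFocus's actual infrastructure.

```python
# Hypothetical fan-out of independent room simulations across workers.
# A thread pool stands in for many GPU instances in the cloud.
from concurrent.futures import ThreadPoolExecutor

def simulate_room(room_id: int) -> dict:
    """Placeholder for one ray-traced render of a room's echoes."""
    reflections = 10_000 + room_id  # stand-in for the real reflection count
    return {"room": room_id, "reflections": reflections}

def render_all(room_ids):
    # Rooms share no state, so they parallelize cleanly.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(simulate_room, room_ids))

results = render_all(range(4))
```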
We looked for existing hardware to support our breakthrough software, but every option forced limiting trade-offs in wearability, battery life, and more. So we decided to build it ourselves.