To solve this, we created a ray-tracing auralization framework called Floyd. Floyd takes a 3D model of a room along with per-patient head recordings and uses them to simulate millions of echoes.
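Floyd itself isn't public, but the core step it performs, rendering a dry signal through a simulated room so it sounds like it was played in that space, can be sketched with the open-source pyroomacoustics library as a stand-in. The room dimensions, positions, and absorption value below are hypothetical, and the per-patient binaural (head recording) stage is omitted:

```python
import numpy as np
import pyroomacoustics as pra

fs = 16000
dry = np.random.randn(fs)  # stand-in for a dry (anechoic) source recording

# Hypothetical 6 m x 5 m x 3 m room with uniform wall absorption.
room = pra.ShoeBox([6, 5, 3], fs=fs, materials=pra.Material(0.2), max_order=17)
room.add_source([2.0, 3.0, 1.5], signal=dry)
room.add_microphone([4.0, 2.0, 1.5])

room.simulate()                          # image-source simulation of the echoes
reverberant = room.mic_array.signals[0]  # the auralized signal at the microphone
```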
For each room, we generate two versions: one loud and noisy, and one quiet. We then train our model to map the noisy version to the quiet one.
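This is a standard paired-regression setup: given matched (noisy, quiet) renderings of the same room, minimize the reconstruction error between the model's output and the quiet target. Here is a minimal sketch in PyTorch with a toy 1-D convolutional denoiser; the architecture, batch shapes, and synthetic data are all hypothetical stand-ins, not our production model:

```python
import torch
import torch.nn as nn

# Toy denoiser: maps a noisy waveform to its quiet counterpart.
model = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=15, padding=7), nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=15, padding=7), nn.ReLU(),
    nn.Conv1d(32, 1, kernel_size=15, padding=7),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    # Stand-ins for a batch of paired renderings of the same rooms.
    quiet = torch.randn(8, 1, 16000)               # quiet target
    noisy = quiet + 0.3 * torch.randn_like(quiet)  # noisy input
    loss = loss_fn(model(noisy), quiet)
    opt.zero_grad()
    loss.backward()
    opt.step()
```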
For each echo, we simulate tens of thousands of sound reflections to make sure it is indistinguishable from a real-world echo. By running on dozens of GPUs in Amazon's cloud, we can do this at scale.
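To show what tracing reflections means here, below is a minimal CPU sketch of Monte Carlo acoustic ray tracing in a shoebox room: rays leave the source in random directions, bounce specularly off the walls while losing energy, and deposit energy into a time histogram whenever they pass near the receiver. All constants are hypothetical, and the real pipeline would distribute the ray loop across GPUs rather than run it in plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

ROOM = np.array([6.0, 5.0, 3.0])  # shoebox dimensions in meters (hypothetical)
SRC = np.array([2.0, 3.0, 1.5])   # source position
MIC = np.array([4.0, 2.0, 1.5])   # receiver position
MIC_RADIUS = 0.3                  # detection sphere around the receiver
ABSORPTION = 0.2                  # fraction of energy lost per wall bounce
C = 343.0                         # speed of sound, m/s
FS = 16000                        # time resolution of the energy histogram

def trace_ray(vel, max_bounces=50):
    """Follow one specular ray; return (arrival_time, energy) receiver hits."""
    pos, energy, t, hits = SRC.copy(), 1.0, 0.0, []
    for _ in range(max_bounces):
        # Distance to the nearest axis-aligned wall along the current direction.
        dists = np.full(3, np.inf)
        fwd, bwd = vel > 1e-12, vel < -1e-12
        dists[fwd] = (ROOM[fwd] - pos[fwd]) / vel[fwd]
        dists[bwd] = -pos[bwd] / vel[bwd]
        d, axis = dists.min(), dists.argmin()
        # Record a hit if this segment passes within MIC_RADIUS of the receiver.
        proj = np.clip((MIC - pos) @ vel, 0.0, d)
        if np.linalg.norm(pos + proj * vel - MIC) < MIC_RADIUS:
            hits.append((t + proj / C, energy))
        # Advance to the wall, reflect, and absorb.
        pos, t = pos + d * vel, t + d / C
        vel = vel.copy()
        vel[axis] = -vel[axis]
        energy *= 1.0 - ABSORPTION
    return hits

# Shoot many rays in uniform random directions; bin arrivals into a histogram.
n_rays = 20000
histogram = np.zeros(FS)  # one second of energy decay
for _ in range(n_rays):
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    for t, e in trace_ray(v):
        idx = int(t * FS)
        if idx < FS:
            histogram[idx] += e / n_rays
```

Because each ray is independent, the work parallelizes trivially: every GPU (or machine) traces its own batch of rays and the histograms are summed at the end, which is what makes the dozens-of-GPUs scaling straightforward.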
We looked for existing hardware to support our breakthrough software, but every option came with limiting trade-offs in wearability, battery life, and more. So we decided to build the hardware ourselves.