Submitted by: Daniel Cardosi, Lucas Baltzell, Virginia Best, Boston University

For more information about the team's research, please visit:

Solving the cocktail party problem

One of the biggest challenges in the development of hearing aids is performance in settings with multiple sound sources, known as ‘cocktail party’ listening environments. Distinguishing speech from background noise such as music and competing talkers is complex. To do so, the human brain performs a set of tasks simultaneously, using cues associated with the direction of sound sources, voice characteristics (pitch and speed, for example), and visual information such as lip movements and gestures. Listeners with hearing impairment find these complex auditory scenes particularly difficult, and current hearing aids are not up to the task of helping them.

Daniel Cardosi, Lucas Baltzell and Virginia Best of Boston University set out to develop a new speech-enhancement algorithm for hearing aids to solve this "cocktail-party problem". The Tympan is ideal for testing such algorithms: it is a portable device that allows experimentation in real-world environments, and its powerful Teensy processor can run algorithms both in stereo and in real time. For this project, the algorithm was written in the Teensyduino software environment and uploaded to a Tympan Rev E fitted with a Tympan AIC CODEC daughterboard. A pair of Tympan BTE earpieces was used for recording and playback.

The algorithm, designed to improve speech intelligibility, aims to strengthen binaural cues: the interaural differences critical for spatial hearing. How? Binaural cues are preferentially sampled during acoustic onsets, the short-duration, high-energy peaks within amplitude envelopes (in the case of speech, at the beginnings of phonemes, the distinct units of sound that distinguish one word from another). Sound signals can be manipulated with expansion and compression algorithms to steepen these rising-amplitude segments while attenuating constant-amplitude segments, the result of which, for a listener, is increased binaural sensitivity. The research team hopes that this approach will improve listeners’ abilities to hear and understand speakers in multi-talker environments.
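The expansion/compression idea can be sketched in a few lines of Python. This is an illustrative envelope-based sketch, not the team's actual Teensyduino implementation; the function name, gain values, and smoothing parameters are all hypothetical:

```python
import numpy as np

def enhance_onsets(signal, fs, onset_gain=2.0, steady_gain=0.5,
                   smooth_ms=10.0, slope_thresh=1e-6):
    """Boost rising-envelope (onset) segments and attenuate
    steady-state segments of a signal (illustrative sketch only)."""
    # Amplitude envelope: rectify, then moving-average smoothing
    n = max(1, int(fs * smooth_ms / 1000))
    env = np.convolve(np.abs(signal), np.ones(n) / n, mode="same")
    # Rising envelope -> onset (expand); flat or falling -> attenuate
    slope = np.gradient(env)
    gain = np.where(slope > slope_thresh, onset_gain, steady_gain)
    return signal * gain

# Toy input: a 50 Hz tone with a 100 ms rising onset, then steady amplitude
fs = 1000
t = np.arange(fs) / fs
ramp = np.minimum(t / 0.1, 1.0)
sig = ramp * np.sin(2 * np.pi * 50 * t)
out = enhance_onsets(sig, fs)
```

In this sketch the onset portion of the tone comes out louder than in the input, while the constant-amplitude portion is attenuated, mirroring the steepened-onset effect shown in the figure below. A real-time hearing-aid version would of course need a causal, sample-by-sample envelope follower rather than offline array operations.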

Processed/enhanced (dotted line) and unprocessed (solid line) waveforms of the spoken sentence 'sound that noble accident of the air'.

The ASA2021 design challenge provided a great opportunity for the team to use the Tympan for this type of research. The team will continue working to improve hearing devices and plans to test the algorithm in field trials and laboratory experiments, in which the Tympan can play a useful role. This work will ultimately help those with hearing impairments perform complex real-world listening tasks and help solve the cocktail party problem.

About the ASA2021 Tympan Design Challenge

During the ASA (Acoustical Society of America) conference in June 2021, Tympan hosted a design challenge: What is possible with the Tympan?

Ten exciting new applications were submitted and presented at the following ASA conference: enhancements of hearing aids, spatial acoustic processing, smart earphones, and much more. Stay tuned to learn what is possible and to keep track of future developments.