HeRBiE - Binaural sound-localization on a mobile robot

Diploma/Master's Thesis: Speech detection on a mobile robot [assigned]

Submitted by Tom Goeckel on 10 January 2011 - 16:18


Our sound localization framework should be adapted to speech signals to enhance human-machine interaction. To do this, the system must be able to distinguish speech signals from ambient noise; currently, the entire sound mixture captured by the microphones is used to build the sound localization map.
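One simple baseline for separating speech from ambient noise is a frame-energy voice-activity detector: frames whose energy clearly exceeds an estimated noise floor are flagged as potential speech. This is only an illustrative sketch under assumed parameters (frame length, hop, threshold ratio), not the method used in the framework:

```python
import numpy as np

def frame_energies(signal, frame_len=256, hop=128):
    """Split a mono signal into overlapping frames and return per-frame energy."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([np.sum(f.astype(float) ** 2) for f in frames])

def voice_activity_mask(signal, threshold_ratio=5.0, frame_len=256, hop=128):
    """Mark frames whose energy exceeds threshold_ratio times the
    quietest-decile frame energy (taken here as the noise-floor estimate).
    threshold_ratio=5.0 is an assumed, untuned value."""
    e = frame_energies(signal, frame_len, hop)
    noise_floor = np.percentile(e, 10) + 1e-12
    return e > threshold_ratio * noise_floor

# Example: 1 s of low-level noise with a louder tonal burst in the middle.
rng = np.random.default_rng(0)
fs = 8000
x = 0.01 * rng.standard_normal(fs)
x[3000:5000] += 0.5 * np.sin(2 * np.pi * 220 * np.arange(2000) / fs)
mask = voice_activity_mask(x)
```

Real speech detection would of course need spectral features rather than raw energy, but the masking idea carries over: only frames marked as speech would feed the localization map.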

Diploma/Master's Thesis: Massively parallel digital signal processing using CUDA [assigned]

Submitted by Tom Goeckel on 10 January 2011 - 13:04


To simulate the filtering processes of the cochlea, a digital signal should be segregated into an arbitrary number of narrowband signals. In subsequent stages, the software should be able to process these narrowband signals in real time. This requires a degree of parallelism that is insufficiently supported by currently available CPUs.
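The decomposition step can be illustrated with a toy FFT-based filterbank that slices the spectrum into contiguous bands, so the narrowband signals sum back to the original. Cochlear models typically use gammatone filters on a logarithmic frequency axis, and the thesis targets a CUDA implementation; the uniform NumPy version below is only a sketch of the segregation idea:

```python
import numpy as np

def fft_filterbank(signal, n_bands):
    """Split a real signal into n_bands narrowband signals by partitioning
    its half-spectrum into contiguous bin ranges. Because the bins are
    partitioned exactly, the bands sum back to the original signal."""
    spec = np.fft.rfft(signal)
    edges = np.linspace(0, len(spec), n_bands + 1).astype(int)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_spec = np.zeros_like(spec)
        band_spec[lo:hi] = spec[lo:hi]          # keep only this band's bins
        bands.append(np.fft.irfft(band_spec, n=len(signal)))
    return np.array(bands)

# Two sinusoids at 440 Hz and 3000 Hz, 1 s at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 3000 * t)
bands = fft_filterbank(x, n_bands=8)
```

Each band is an independent narrowband signal, which is exactly the structure that maps well onto massively parallel hardware: in a CUDA implementation, each band (or each band-sample pair) could be assigned to its own thread block.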


Submitted by stf on 9 December 2007 - 15:10

Biological algorithms of sound localization may be useful for studying the acoustic orientation of robots. One of us (Lakemeyer) collaborated on the software design of a mobile robot that can, among other things, give guided tours through exhibitions. We now want to equip this robot with a sound localization system and a speech recognition system. The Jeffress model of binaural interaction is based on delay lines and coincidence detection; it is realized in the barn owl. We have implemented a Jeffress-like model on a computer and tested it with noise and speech signals (Calmes et al., in preparation). The model performs well even in cluttered surroundings. We are currently implementing the software on the robot.
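The core of a Jeffress-style model can be sketched in a few lines: each "coincidence detector" responds to the left and right signals under one fixed relative delay, and the detector with the strongest response marks the interaural time difference (ITD). Computationally this amounts to picking the peak of the cross-correlation over a limited lag range. The sketch below assumes sample-level delays and a simple dot-product detector, which is far simpler than the group's actual implementation:

```python
import numpy as np

def jeffress_itd(left, right, max_lag):
    """Estimate the interaural time difference in samples.
    Each lag in [-max_lag, max_lag] plays the role of one delay-line /
    coincidence-detector pair; the winning lag is the estimated ITD.
    A negative result means the left channel leads."""
    lags = range(-max_lag, max_lag + 1)
    responses = []
    for lag in lags:
        if lag >= 0:
            a, b = left[lag:], right[:len(right) - lag]
        else:
            a, b = left[:len(left) + lag], right[-lag:]
        responses.append(np.dot(a, b))       # coincidence strength at this lag
    return list(lags)[int(np.argmax(responses))]

# A noise source that reaches the left ear 5 samples before the right ear.
rng = np.random.default_rng(1)
src = rng.standard_normal(4000)
left = src[5:]       # left[n] = src[n + 5]
right = src[:-5]     # right[n] = src[n], i.e. delayed by 5 samples
itd = jeffress_itd(left, right, max_lag=20)  # expected: -5 (left leads)
```

With the microphone spacing known, the winning lag converts directly into an azimuth estimate, which is how such a coincidence map feeds a sound localization map on the robot.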

Homepage: HeRBiE @ Bio II