Our sound localization framework should be adapted to speech signals in order to enhance human-machine interaction. To do so, the system must be able to distinguish speech signals from ambient noise. Currently, the entire sound mixture captured by the microphones is used to build the sound localization map.
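One simple way to make this distinction is an energy-based voice activity detector that labels short frames as speech or noise before they reach the localization stage. The following Python sketch is only an illustration under assumed parameters (frame length, threshold); the function name and defaults are not part of the existing framework.

    import numpy as np

    def frame_energy_vad(signal, sample_rate, frame_ms=20, threshold_db=-35.0):
        # Label each frame as speech (True) or ambient noise (False) based on
        # its short-time energy relative to the loudest frame in the signal.
        # Frame length and threshold are illustrative assumptions.
        frame_len = int(sample_rate * frame_ms / 1000)
        n_frames = len(signal) // frame_len
        frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
        energy = np.sum(frames.astype(float) ** 2, axis=1)
        energy_db = 10.0 * np.log10(energy / np.max(energy) + 1e-12)
        return energy_db > threshold_db

Frames flagged as noise could then be excluded, so that only speech-dominated segments contribute to the sound localization map.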
To simulate the filtering processes of the cochlea, a digital signal should be decomposed into an arbitrary number of narrowband signals. In subsequent stages, the software should be able to process these narrowband signals in real time. This requires a high degree of parallelism that is insufficiently supported by currently available CPUs.
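As a rough sketch of such a decomposition, the signal can be passed through a bank of bandpass filters, one per narrowband channel. Cochlea models typically use gammatone filters on an ERB scale; the Butterworth filters and logarithmic band spacing below are simplifying assumptions for illustration only.

    import numpy as np
    from scipy.signal import butter, sosfilt

    def narrowband_decompose(signal, sample_rate, n_bands=16,
                             f_low=100.0, f_high=8000.0, order=4):
        # Split the signal into n_bands narrowband channels using
        # logarithmically spaced Butterworth bandpass filters.
        edges = np.logspace(np.log10(f_low), np.log10(f_high), n_bands + 1)
        channels = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(order, [lo, hi], btype='bandpass',
                         fs=sample_rate, output='sos')
            channels.append(sosfilt(sos, signal))
        return np.stack(channels)  # shape: (n_bands, len(signal))

Because each channel is filtered independently, the band-wise processing is naturally parallel across channels, which is exactly where the limited parallelism of current CPUs becomes the bottleneck.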
Knowledge Representation (KR) is a vibrant and exciting field in artificial intelligence. The endeavor rests on two fundamental ideas. First, to reason about the problem domain one must formalize it, perhaps in some logical formalism such as propositional logic or first-order logic. Second, for the representation to be useful one must be able to obtain reasonable and intuitive inferences in a timely fashion.
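Both ideas can be illustrated with a tiny propositional example: the knowledge base is formalized as a Boolean formula, and inference is the check whether every model of the knowledge base also satisfies a query. The Python sketch below uses naive truth-table enumeration; the symbol names are illustrative only.

    from itertools import product

    def entails(kb, query, symbols):
        # Check KB |= query by enumerating all truth assignments.
        # kb and query are functions mapping an assignment dict to a bool.
        for values in product([True, False], repeat=len(symbols)):
            model = dict(zip(symbols, values))
            if kb(model) and not query(model):
                return False  # a model of KB that falsifies the query
        return True

    # Example: KB = { rain -> wet, rain }; query = wet
    kb = lambda m: (not m["rain"] or m["wet"]) and m["rain"]
    query = lambda m: m["wet"]
    print(entails(kb, query, ["rain", "wet"]))  # True

The enumeration visits 2^n assignments for n symbols, which already hints at the intractability discussed next.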
Unfortunately, propositional reasoning is intractable (Boolean satisfiability is NP-complete) and first-order logic is undecidable. Thus, an important goal of the KR enterprise is to find a tradeoff between the expressiveness of the representational language and the computational behavior of the associated reasoning tasks. A main objective of this seminar is to discuss approaches that navigate this tradeoff.