2000, Analog Integrated Circuits and Signal Processing
In this paper we present an analog circuit that determines the direction of incoming sound using two microphones. The circuit is inspired by biology and uses two silicon cochleae to determine the azimuthal angle of the sound source with respect to the axis of the two microphones from the time difference between the two microphone signals. A new algorithm, adapted to an analog VLSI implementation, is presented together with simulation and measurement results.
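The abstract above recovers azimuth from the interaural time difference between the two microphones. Under a simple free-field, far-field assumption (a generic sketch, not necessarily the model used in the paper), the mapping from time difference to angle is:

```python
import math

def itd_to_azimuth(delta_t, mic_distance, speed_of_sound=343.0):
    """Estimate azimuth (degrees) from an interaural time difference.

    Far-field model: delta_t = (d / c) * sin(theta), hence
    theta = arcsin(c * delta_t / d). The argument is clamped to
    [-1, 1] to guard against measurement noise.
    """
    s = max(-1.0, min(1.0, speed_of_sound * delta_t / mic_distance))
    return math.degrees(math.asin(s))

# Example: microphones 20 cm apart; a 0.292 ms lead on one channel
# corresponds to a source roughly 30 degrees off-axis.
print(itd_to_azimuth(0.000292, 0.20))
```

Note that the arcsine flattens near ±90°, which is why ITD-based systems are typically evaluated on a restricted range such as -45° to 45°.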
2009 IEEE International Conference on Acoustics, Speech and Signal Processing, 2009
A neuromorphic sound localisation system is proposed. It employs two microphones and a pair of silicon cochleae with address event interface for front-end processing. This allows subsequent processing to be implemented with spike-based algorithms. The system is adaptive and supports online learning. Its localisation capability was tested with white noise and pure tone stimuli, with an average error of around 3° in the -45° to 45° range.
1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation
We propose a localization model which uses monaural spectral cues for localizing a sound source in a 1-D plane. A neuromorphic microphone is constructed to implement this model. We use the term "neuromorphic" because the microphone's operating principles take advantage of biologically based monaural cues. By concentrating on monaural cues, we hope to better understand human sound localization and also build low-cost stereo capabilities into a single microphone. The Head Related Transfer Function (HRTF) plays a critical role in human monaural localization, since the shape of the external ear (pinna) spectrally shapes the sound differently for each sound source direction. Using HRTFs, humans can perceive the difference between front and back and the sound source's different elevation positions using only a single ear. The neuromorphic microphone relies on a specially shaped reflecting structure that allows echo-time processing to localize the sound. Since our recorded signal is composed of the direct sound and its echo, the sound is a simplified version of actual HRTF recordings, which are composed of the direct sound and its reflections from the external ear, head, shoulder, and torso. The recorded signal from our special microphone is first processed using a gamma filter. The gamma filter generalizes the standard transversal filter by adding the ability to choose an optimal timescale. Our studies have shown that the gamma filter solutions require on the order of five parameters while the more typical FIR filter solutions require hundreds of parameters. A multilayer perceptron neural network is then used to learn the elevation angle of the sound, allowing the microphone to correctly localize sounds. With the successful building of the special microphone, a full hardware version of a neuromorphic microphone is possible.
Neural Information Processing Systems, 1998
We describe the first single microphone sound localization system and its inspiration from theories of human monaural sound localization. Reflections and diffractions caused by the external ear (pinna) allow humans to estimate sound source elevations using only one ear. Our single microphone localization model relies on a specially shaped reflecting structure that serves the role of the pinna. Specially designed analog VLSI circuitry uses echo-time processing to localize the sound. A CMOS integrated circuit has been designed, fabricated, and successfully demonstrated on actual sounds.
Applied Sciences, 2021
We present a biologically inspired sound localisation system for reverberant environments using the Cascade of Asymmetric Resonators with Fast-Acting Compression (CAR-FAC) cochlear model. The system exploits a CAR-FAC pair to pre-process binaural signals that travel through the inherent delay line of the cascade structures, as each filter acts as a delay unit. Following the filtering, each cochlear channel is cross-correlated with all the channels of the other cochlea using a quantised instantaneous correlation function to form a 2-D instantaneous correlation matrix (correlogram). The correlogram contains both interaural time difference and spectral information. The generated correlograms are analysed using a regression neural network for localisation. We investigate the effect of the CAR-FAC nonlinearity on the system performance by comparing it with a CAR only version. To verify that the CAR/CAR-FAC and the quantised instantaneous correlation provide a suitable basis with which to...
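The correlogram described above can be approximated in a few lines. The sketch below (a toy version: normalised channels and plain zero-lag correlation, whereas the paper uses a quantised instantaneous correlation) cross-correlates every channel of one filterbank with every channel of the other, so interaural delay shows up as a shift of the matrix ridge:

```python
import numpy as np

def correlogram(left, right):
    """Zero-lag correlation matrix between two filterbank outputs.

    left, right: arrays of shape (channels, samples), the outputs of
    the two cochlear filter cascades. Entry (i, j) is the correlation
    of left channel i with right channel j; the ridge of the matrix
    shifts with the interaural delay, since each cascade filter also
    acts as a delay unit.
    """
    l = left / (np.linalg.norm(left, axis=1, keepdims=True) + 1e-12)
    r = right / (np.linalg.norm(right, axis=1, keepdims=True) + 1e-12)
    return l @ r.T  # (channels, channels) correlation matrix

# Toy example: each "cochlea" is a pure delay line over a noise burst,
# and the right ear receives the sound 3 samples late.
rng = np.random.default_rng(0)
sig = rng.standard_normal(1000)
left = np.stack([np.roll(sig, k) for k in range(8)])
right = np.stack([np.roll(sig, k + 3) for k in range(8)])
c = correlogram(left, right)
print(np.argmax(c, axis=1))  # ridge offset by the 3-sample delay
```

A regression network (as in the paper) would then map such matrices to source angles.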
2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), 2020
Auditory sound source localization with only two ears is a computationally intensive problem. Nevertheless, some animals show impressive performance solving it. Neurons in the auditory system of these animals are specialized to exploit spatial information from binaural sound signals. A set of neurons located in the lateral superior olivary (LSO) complex responds to interaural level differences. Here, we present a spike-based sound source localization model inspired by neurons in the LSO that computes the level difference of incoming spike signals by integrating excitatory and inhibitory inputs for localizing sound sources. The model is implemented on the IBM TrueNorth neurosynaptic system to achieve real-time compute performance. We introduce system components, like spectro-temporal smoothing and weighted sum units and explain how they are implemented on the TrueNorth chip. Correct behavior for synthetic inputs is demonstrated. Finally, test scenarios with recorded natural sounds indicate that the system reliably computes the interaural level difference of perceived sound sources.
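The excitatory–inhibitory integration described above can be sketched as a toy rate model (hypothetical code, not the TrueNorth implementation): each LSO-like unit subtracts the contralateral (inhibitory) spike count from the ipsilateral (excitatory) one, and a bank of units with graded thresholds yields a population code for the level difference.

```python
def lso_unit(excit, inhib, threshold):
    """Rectified excitatory-minus-inhibitory response of one unit."""
    return max(0, excit - inhib - threshold)

def lso_population(left_count, right_count, thresholds=(0, 2, 4, 8, 16)):
    """Population response of LSO-like units tuned to increasing ILDs.

    left_count / right_count: spike counts from the two cochlear sides
    within one integration window. The number of active units grows
    with the level advantage of the left (ipsilateral) side.
    """
    return [1 if lso_unit(left_count, right_count, t) > 0 else 0
            for t in thresholds]

# A strongly left-leaning source drives more units than a nearly
# balanced one.
print(lso_population(30, 10))  # → [1, 1, 1, 1, 1]
print(lso_population(12, 10))  # → [1, 0, 0, 0, 0]
```

On neuromorphic hardware the subtraction is realised with excitatory and inhibitory synapses onto integrate-and-fire neurons rather than explicit counts; the spectro-temporal smoothing mentioned in the abstract stabilises the counts before this stage.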
Sound source localization is used in various applications such as industrial noise control, speech detection in mobile phones, speech enhancement in hearing aids and many more. The newest video conferencing setups use sound source localization: the position of a speaker is detected from the differences between the audio waves received by a microphone array, and after detection the camera focuses on the location of the speaker. The human brain is also able to detect the location of a speaker from auditory signals. It uses, among other cues, the difference in amplitude and arrival time of the sound wave at the two ears, called interaural level and time difference. However, the substrate and computational primitives of our brain are different from classical digital computing. Due to its low power consumption of around 20 watts and its real-time performance, the human brain has become a great source of inspiration for emerging technologies. One of these technologies is neuromorphic hardware whic...
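The two binaural cues named above can be estimated from a stereo waveform with standard signal processing. A minimal sketch (a generic helper under simple assumptions, not taken from any of the cited systems): ILD as the RMS level ratio in dB, ITD as the lag of the cross-correlation peak.

```python
import numpy as np

def interaural_cues(left, right, fs):
    """Estimate interaural level difference (dB) and time difference (s).

    ILD is the RMS level ratio of the channels in dB; ITD is the lag
    of the cross-correlation peak. With numpy's correlation convention,
    a negative ITD means the right channel lags, i.e. the source is to
    the left.
    """
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    ild_db = 20.0 * np.log10(rms(left) / rms(right))
    xcorr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(xcorr)) - (len(right) - 1)
    return ild_db, lag / fs

# Right channel delayed by 5 samples and 6 dB quieter: source on the left.
rng = np.random.default_rng(1)
sig = rng.standard_normal(2000)
ild, itd = interaural_cues(sig, 0.5 * np.roll(sig, 5), fs=16000)
print(round(ild, 2), itd)  # ≈ 6.02 dB, ITD of -5/16000 s
```

Real systems refine both cues per frequency band, since ILD dominates at high frequencies and ITD at low frequencies (the duplex theory).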
Sensors
Conventional processing of sensory input often relies on uniform sampling leading to redundant information and unnecessary resource consumption throughout the entire processing pipeline. Neuromorphic computing challenges these conventions by mimicking biology and employing distributed event-based hardware. Based on the task of lateral auditory sound source localization (SSL), we propose a generic approach to map biologically inspired neural networks to neuromorphic hardware. First, we model the neural mechanisms of SSL based on the interaural level difference (ILD). Afterward, we identify generic computational motifs within the model and transform them into spike-based components. A hardware-specific step then implements them on neuromorphic hardware. We exemplify our approach by mapping the neural SSL model onto two platforms, namely the IBM TrueNorth Neurosynaptic System and SpiNNaker. Both implementations have been tested on synthetic and real-world data in terms of neural tuning...
IEE Proceedings - Circuits, Devices and Systems, 1999
The silicon cochlea offers an interesting approach to implementing a hardware system for use in spatial localisation of a sound source. For such an application, knowledge of the different ways in which the cochlea has been implemented in silicon, together with the techniques necessary for sound localisation is essential. This review looks at how the auditory pathway has been modelled in silicon and how these models correspond to biological counterparts. The various filtering techniques that have been employed to implement the silicon cochlea are described and contrasted. Then, sound localisation techniques are outlined and VLSI implementations adapting the silicon cochlea for sound localisation applications are presented.
Nature Electronics
Many speech processing systems struggle in conditions with low signal-to-noise ratios and in changing acoustic environments. Adaptation at the transduction level with integrated signal processing could help to address this; in human hearing, transduction and signal processing are integrated and can be adaptively tuned for noisy conditions. Here we report a microelectromechanical cochlea as a bio-inspired acoustic sensor with integrated signal processing functionality. Real-time feedback is used to tune the sensing and processing properties, and dynamic switching between linear and nonlinear characteristics improves the detection of signals in noisy conditions, increases the sensor dynamic range and enables adaptation to changing acoustic environments. The transition to nonlinear behaviour is attributed to a Hopf bifurcation and we experimentally validate its dependence on sensor and feedback parameters. We also show that output-signal coupling between two coupled sensors can increas...
Neural Computation, 1989
The barn owl accurately localizes sounds in the azimuthal plane, using interaural time difference as a cue. The time-coding pathway in the owl's brainstem encodes a neural map of azimuth, by processing interaural timing information. We have built a silicon model of the time-coding pathway of the owl. The integrated circuit models the structure as well as the function of the pathway; most subcircuits in the chip have an anatomical correlate. The chip computes all outputs in real time, using analog, continuous-time processing.