Anthropomimetic robots sense, behave, interact, and feel like humans. By this definition, they require not only human-like physical hardware and actuation but also brain-like control and sensing. The most self-evident realization to meet those requirements would be a human-like musculoskeletal robot with a brain-like neural controller. While both musculoskeletal robotic hardware and neural control software have existed for decades, a scalable approach that could be used to build and control an anthropomimetic human-scale robot has not yet been demonstrated. Combining Myorobotics, a framework for musculoskeletal robot development, with SpiNNaker, a neuromorphic computing platform, we present the proof of principle of a system that can scale to dozens of neurally controlled, physically compliant joints. At its core, it implements a closed-loop cerebellar model that provides real-time, low-level, neural control at minimal power consumption and maximal …
2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018
Clustering is crucial for many computer vision applications such as robust tracking, object detection and segmentation. This work presents a real-time clustering technique that takes advantage of the unique properties of event-based vision sensors. Since event-based sensors trigger events only when the intensity changes, the data is sparse, with low redundancy. Thus, our approach redefines the well-known mean-shift clustering method using asynchronous events instead of conventional frames. The potential of our approach is demonstrated in a multi-target tracking application using Kalman filters to smooth the trajectories. We evaluated our method on an existing dataset with patterns of different shapes and speeds, and on a new dataset that we collected, in which the sensor was attached to a Baxter robot in an eye-in-hand setup monitoring real-world objects in an action manipulation task. Clustering accuracy achieved an F-measure of 0.95, reducing the computational cost by 88% compared to the frame-based method. The average tracking error was 2.5 pixels, and the clustering maintained a consistent number of clusters over time.
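The event-driven mean-shift idea described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation; the function name, bandwidth, and learning-rate parameters are hypothetical:

```python
import math

def update_clusters(event, clusters, bandwidth=10.0, alpha=0.1):
    """Assign an asynchronous event (x, y) to the nearest cluster centre
    within the kernel bandwidth and shift that centre toward the event
    (one mean-shift step); spawn a new cluster if no centre is close enough."""
    x, y = event
    best, best_d2 = None, bandwidth ** 2
    for c in clusters:
        d2 = (c[0] - x) ** 2 + (c[1] - y) ** 2
        if d2 < best_d2:
            best, best_d2 = c, d2
    if best is None:
        clusters.append([float(x), float(y)])
    else:
        # Gaussian kernel weight: closer events pull the centre harder
        w = alpha * math.exp(-best_d2 / (2 * bandwidth ** 2))
        best[0] += w * (x - best[0])
        best[1] += w * (y - best[1])
    return clusters
```

Because each event touches only the nearest centre, the update cost is independent of any frame size, which is where the computational saving over frame-based mean-shift comes from.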
Time synchronization faces increasing performance demands nowadays. This is particularly evident in segments such as new scientific infrastructures, the power grid, telecommunications, and fifth-generation (5G) wireless networks. All of them share the need for a time synchronization technology offering very low synchronization error across large networks. The new version of the IEEE 1588-2019 protocol offers a considerable performance improvement thanks to its high-accuracy profile, based on White Rabbit (WR). The original WR implementation guarantees a synchronization accuracy below 1 ns for a small number of cascaded nodes. This contribution evaluates the performance degradation when a considerable number of devices are deployed in cascaded configurations. In order to improve the performance, different delay-measurement implementations have been evaluated in a series of experiments. The results prove the considerable benefits of our implementation over the current WR implementation.
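The delay measurement underlying PTP/WR synchronization rests on the standard two-way exchange defined by IEEE 1588. As a minimal illustration (assuming symmetric path delays, which is exactly the assumption WR's calibration refines):

```python
def ptp_offset_delay(t1, t2, t3, t4):
    """Standard two-way PTP exchange: the master sends Sync at t1 (received
    at t2 on the slave clock); the slave sends Delay_Req at t3 (received at
    t4 on the master clock). Assuming symmetric path delays, recover the
    slave's clock offset and the one-way path delay."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Example: true offset 5, true one-way delay 2 -> recovered exactly
print(ptp_offset_delay(0, 7, 10, 7))  # (5.0, 2.0)
```

Any asymmetry between the two path directions shows up directly as an offset error, which is why the delay-measurement implementation matters so much in long cascaded chains.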
The cerebellum is one of the parts of the brain that has aroused the most curiosity amongst neuroscientists. Indeed, several forms of plasticity have been reported at multiple sites of the cerebellar circuitry, including not just the parallel fibers (a well-accepted plasticity site) but also the synaptic inputs of the deep cerebellar nuclei (DCN) (from mossy fibers (MFs) and Purkinje cells (PCs)). The Marr and Albus model already hypothesized that parallel fiber (PF)-Purkinje cell synapses present both long-term potentiation (LTP) [5,6] and long-term depression (LTD) plasticity so as to correlate the activity at parallel fibers with the incoming error signal conveyed through the climbing fibers. Nevertheless, subsequent studies have demonstrated that many sites in the cerebellum show traces of plasticity. But how those distributed plasticity mechanisms may improve the operational capabilities of the cerebellum is still an open issue.
Modeling and simulating the neural structures which make up our central nervous system is instrumental for deciphering their underlying computational principles. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from the neural to the behavioral level. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at the neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranging from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamics are evaluated: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation. We propose two modifications for the event-driven family: a look-up-table recombination to better cope with the incremental neural complexity, together with better handling of synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running on CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity.
We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity.
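The idea behind the bi-fixed-step integration method — switching between a coarse and a fine fixed step depending on how stiff the current dynamics are — can be sketched for a LIF neuron. This is an illustrative toy, not the paper's implementation; the threshold-proximity criterion and all parameters are hypothetical:

```python
def bi_fixed_step_lif(i_ext, t_end, dt_coarse=1.0, dt_fine=0.1,
                      tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """Integrate a LIF membrane with two fixed step sizes: a coarse step far
    from threshold, and fine substeps near threshold, where the dynamics
    change fastest (the 'stiff' region around spike generation)."""
    v, t, spikes = v_rest, 0.0, []
    while t < t_end:
        # switch to the fine step when the membrane approaches threshold
        dt = dt_fine if v > v_thresh - 2.0 else dt_coarse
        v += dt * ((v_rest - v) + i_ext) / tau   # forward Euler step
        t += dt
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes
```

The coarse step keeps the bulk of the simulation cheap, while the fine substeps recover timing accuracy exactly where a single fixed step would misplace the spike.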
Combined efforts in the fields of neuroscience, computer science, and biology have enabled the design of biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Due to the complexity of these brain models, which at the current stage cannot meet real-time constraints, it is not possible to embed them in a real-world task. Rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of …
The projection of 3D scenarios onto 2D surfaces produces distortion in the resulting images that affects the accuracy of low-level motion primitives. Independently of the motion detection algorithm used, post-processing stages that use motion data are dominated by this distortion artefact. Therefore, we need to devise a way of reducing the distortion effect in order to improve the post-processing capabilities of a vision system based on motion cues. In this paper we adopt a space-variant mapping strategy and describe a computing architecture that finely pipelines all the processing operations to achieve high-performance, reliable processing. We validate the computing architecture in the framework of a real-world application: a vision-based system for assisting overtaking manoeuvres that uses motion information to segment approaching vehicles. The overtaking scene seen from the rear-view mirror is distorted due to perspective; therefore, a space-variant mapping strategy that corrects perspective distortion artefacts becomes of high interest for arriving at reliable motion cues.
Proceedings of the 6th WSEAS International Conference on Signal Processing, Computational Geometry and Artificial Vision, 2006
In this work we present a vision system that includes circuits to compute different modalities in parallel, such as local image features (magnitude, phase and orientation), motion, and stereo vision. This becomes possible by efficiently using the massively parallel computing resources of FPGA devices. The paper briefly describes the complete system and discusses the hardware consumption and performance of each visual modality. Finally, the work highlights the huge amount of data produced by such a system and the necessity of on-chip integration mechanisms.
Optical flow estimation is a widely used technique in computer vision. Algorithms that are as robust as possible are increasingly needed and studied with a view to practical applications in uncontrolled real-world environments. Among the various techniques, methods based on the phase of the luminance signal stand out for their robustness and versatility; their main drawback is their high computational load. This article presents a phase-based optical flow computation system on FPGA that benefits from these properties. In addition, the system works with multiple orientations and spatial scales (a multiscale model), which allows greater accuracy and a wider range of detectable velocities. The high degree of parallelism available in FPGA devices and a design with fine-grained pipeline stages make real-time processing of VGA-resolution images possible.
The cerebellar microcircuit has been the workbench for theoretical and computational modeling since the beginning of neuroscientific research. The regular neural architecture of the cerebellum inspired different solutions to the long-standing issue of how its circuitry could control motor learning and coordination. Originally, the cerebellar network was modeled using a statistical-topological approach that was later extended by considering the geometrical organization of local microcircuits. However, with the advancement of anatomical and physiological investigations, new discoveries have revealed an unexpected richness of connections, neuronal dynamics and plasticity, calling for a change in modeling strategies so as to include the multitude of elementary aspects of the network in an integrated and easily updatable computational framework. Recently, biophysically accurate "realistic" models using a bottom-up strategy have accounted for both detailed connectivity and nonlinear neuronal membrane dynamics. In this perspective review, we consider the state of the art and discuss how these initial efforts could be further improved. Moreover, we consider how embodied neurorobotic models including spiking cerebellar networks could help explain the role and interplay of distributed forms of plasticity. We envisage that realistic modeling, combined with closed-loop simulations, will help to capture the essence of cerebellar computations and could eventually be applied to neurological diseases and neurorobotic control systems.
This paper presents a computer-aided diagnosis system for skin lesions. Diverse parameters, or features, extracted from fluorescence images are evaluated for cancer diagnosis. The selection of parameters has a significant effect on the cost and accuracy of an automated classifier. A genetic algorithm (GA) performs parameter selection using a K-nearest neighbours (KNN) classifier. We evaluate the classification performance of each subset of parameters selected by the genetic algorithm. This classification approach is modular and enables the easy inclusion and exclusion of parameters, which facilitates the evaluation of their significance for the skin cancer diagnosis. We have implemented this parameter evaluation scheme adopting a strategy that automatically optimizes the K-nearest neighbours classifier and indicates which features are more relevant to the diagnosis problem.
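The GA-driven feature selection loop can be illustrated with a toy sketch: binary masks encode feature subsets, and classifier accuracy serves as the fitness. This is a simplified stand-in (leave-one-out 1-NN instead of a tuned KNN, a tiny population), with all names and parameters hypothetical:

```python
import random

def knn_accuracy(X, y, mask):
    """Leave-one-out 1-NN accuracy using only the features where mask[j] == 1."""
    feats = [j for j, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    correct = 0
    for i in range(len(X)):
        best_d, best_y = float("inf"), None
        for k in range(len(X)):
            if k == i:
                continue
            d = sum((X[i][j] - X[k][j]) ** 2 for j in feats)
            if d < best_d:
                best_d, best_y = d, y[k]
        correct += best_y == y[i]
    return correct / len(X)

def ga_select(X, y, pop=10, gens=20, seed=0):
    """Tiny genetic algorithm: evolve binary feature masks with elitism,
    one-point crossover, and point mutation; fitness = classifier accuracy."""
    rng = random.Random(seed)
    n = len(X[0])
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda m: knn_accuracy(X, y, m), reverse=True)
        elite = scored[: pop // 2]          # keep the best half
        children = []
        for _ in range(pop - len(elite)):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)        # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1     # point mutation
            children.append(child)
        population = elite + children
    return max(population, key=lambda m: knn_accuracy(X, y, m))
```

The mask representation is what makes feature inclusion and exclusion "modular": adding a candidate feature only lengthens the chromosome, leaving the rest of the pipeline unchanged.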
Computer Vision and Image Understanding, Dec 1, 2008
Optical-flow computation is a well-known technique, and there are important fields in which the application of this visual modality commands high interest. Nevertheless, most real-world applications require real-time processing, an issue which has only recently been addressed. Most real-…
Proceedings of the 7th International Work-Conference on Artificial and Natural Neural Networks, Part II: Artificial Neural Nets Problem Solving Methods, Jun 5, 2009
Abstract. In this paper we consider one of the big challenges when constructing modular behaviour architectures for the control of real systems: that is, how to decide which module or opinion … takes control of the … actuators in order to implement the behaviour the robot …
Deep cerebellar nuclei neurons receive both inhibitory (GABAergic) synaptic currents from Purkinje cells (within the cerebellar cortex) and excitatory (glutamatergic) synaptic currents from mossy fibers. These two deep cerebellar nucleus inputs are also thought to be adaptive, embedding interesting properties in the framework of accurate movements. We show that distributed spike-timing-dependent plasticity (STDP) mechanisms located at different cerebellar sites (parallel fibers to Purkinje cells, mossy fibers to deep cerebellar nucleus cells, and Purkinje cells to deep cerebellar nucleus cells) in closed-loop simulations provide an explanation for the complex learning properties of the cerebellum in motor learning. Concretely, we propose a new mechanistic cerebellar spiking model. In this new model, the deep cerebellar nuclei embed a dual functionality: acting as a gain adaptation mechanism and as a facilitator of the slow memory consolidation at mossy fiber to deep cerebellar nucleus synapses. Equipping the cerebellum with excitatory (e-STDP) and inhibitory (i-STDP) mechanisms at the deep cerebellar nuclei afferents allows the accommodation of synaptic memories that were formed at parallel fiber to Purkinje cell synapses and then transferred to mossy fiber to deep cerebellar nucleus synapses. These adaptive mechanisms also contribute to modulating the deep cerebellar nucleus output firing rate (output gain modulation toward optimizing its working range).
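A pair-based STDP kernel of the kind deployed at such sites can be written compactly. The amplitudes and time constant below are illustrative placeholders, not the model's actual values:

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for a spike pair separated by
    dt = t_post - t_pre (ms): potentiate when the presynaptic spike
    precedes the postsynaptic one, depress otherwise, with exponentially
    decaying influence as the spikes move apart in time."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau)    # pre before post: LTP
    return -a_minus * math.exp(dt_ms / tau)       # post before pre: LTD
```

An excitatory (e-STDP) and an inhibitory (i-STDP) variant would apply the same pairing logic with different amplitudes and signs appropriate to the mossy fiber and Purkinje cell afferents, respectively.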
Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have focused more on fitting specific retina functions than on generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. The accuracy of the retina models in capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of …
A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the trade-off between flexibility and computational power; the neuron model is implemented in hardware, while the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) of a neuron in which the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative, we have designed an efficient time-based computing architecture in hardware in which the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system in simulating a model of the cerebellum in which the emulation of the temporal dynamics of the synaptic integration process is important.
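The synaptic-conductance dynamics that make this model expensive — a step increase per input spike followed by exponential decay, injecting charge gradually rather than instantaneously — can be sketched in a simple time-driven simulation. This is a forward-Euler toy with hypothetical parameters; the paper's architecture pipelines this same computation in hardware:

```python
def simulate_srm(spike_times, t_end=100.0, dt=0.1, tau_m=20.0, tau_s=5.0,
                 e_syn=0.0, v_rest=-70.0, v_thresh=-54.0, v_reset=-70.0,
                 g_peak=0.15):
    """SRM-style neuron with an input-driven synaptic conductance g:
    each input spike steps g up by g_peak; g then decays with time
    constant tau_s, so charge is injected gradually over several ms."""
    v, g, t, out = v_rest, 0.0, 0.0, []
    spikes, i = sorted(spike_times), 0
    while t < t_end:
        while i < len(spikes) and spikes[i] <= t:
            g += g_peak                      # input spike arrives
            i += 1
        g -= dt * g / tau_s                  # exponential conductance decay
        # membrane: leak toward rest plus synaptic current g * (E_syn - V)
        v += dt * ((v_rest - v) + g * (e_syn - v)) / tau_m
        if v >= v_thresh:
            out.append(t)                    # output spike, then reset
            v = v_reset
        t += dt
    return out
```

Note that every time step touches every synapse's conductance regardless of input activity, which is exactly why such models resist event-driven software simulation and reward parallel hardware pipelines.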
nthropomimetic robots sense, behave, interact, and feel like humans. By this definition, they req... more nthropomimetic robots sense, behave, interact, and feel like humans. By this definition, they require human-like physical hardware and actuation but also brain-like control and sensing. The most self-evident realization to meet those requirements would be a human-like musculoskeletal robot with a brain-like neural controller. While both musculoskeletal robotic hardware and neural control software have existed for decades, a scalable approach that could be used to build and control an anthropomimetic human-scale robot has not yet been demonstrated. Combining Myorobotics, a framework for musculoskeletal robot development, with SpiNNaker, a neuromorphic computing platform, we present the proof of principle of a system that can scale to dozens of neurally controlled, physically compliant joints. At its core, it implements a closed-loop cerebellar model that provides real-time, low-level, neural control at minimal power consumption and maximal Musculoskeletal Robots
2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018
Clustering is crucial for many computer vision applications such as robust tracking, object detec... more Clustering is crucial for many computer vision applications such as robust tracking, object detection and segmentation. This work presents a real-time clustering technique that takes advantage of the unique properties of eventbased vision sensors. Since event-based sensors trigger events only when the intensity changes, the data is sparse, with low redundancy. Thus, our approach redefines the well-known mean-shift clustering method using asynchronous events instead of conventional frames. The potential of our approach is demonstrated in a multi-target tracking application using Kalman filters to smooth the trajectories. We evaluated our method on an existing dataset with patterns of different shapes and speeds, and a new dataset that we collected. The sensor was attached to the Baxter robot in an eye-in-hand setup monitoring real-world objects in an action manipulation task. Clustering accuracy achieved an F-measure of 0.95, reducing the computational cost by 88% compared to the frame-based method. The average error for tracking was 2.5 pixels and the clustering achieved a consistent number of clusters along time.
Time synchronization faces an increasing performance demand nowadays. This is particularly common... more Time synchronization faces an increasing performance demand nowadays. This is particularly common in different segments, such as new scientific infrastructures, power grid, telecommunications or fifth-generation (5G) wireless networks. All of them share the need for a time synchronization technology offering a very low synchronization error along large networks. The new version of the IEEE-1588-2019 protocol offers a considerable performance improvement thanks to the high accuracy profile based on white rabbit (WR). The original WR implementation guarantees a synchronization accuracy below 1 ns for a small number of cascaded nodes. This contribution evaluates the performance degradation when a considerable number of devices are deployed in cascaded configurations. In order to improve the performance, different delay measurement implementations have been evaluated in a series of experiments. The results prove the considerable benefits of our implementation against the current WR implementation.
The cerebellum is one of the parts of the brain which has aroused more curiosity amongst neurosci... more The cerebellum is one of the parts of the brain which has aroused more curiosity amongst neuroscientists. Indeed, several forms of plasticity mechanisms have been reported at several sites of the cerebellar model circuitry, thus including plasticity mechanisms not just at parallel fibers (a well-accepted plasticity site) but also at synaptic inputs of deep cerebellar nuclei (DCN) (from mossy fibers (MFs) and Purkinje cells (PCs) ). The Marr and Albus model already hypothesized that parallel fiber (PFs) Purkinje cell synapses presented both long-term potentiation (LTP) [5,6] and long-term depression (LTD) plasticity so as to correlate the activity at parallel fibers with the incoming error signal through climbing fibers. Nevertheless, in subsequent studies, it has been demonstrated that many sites in the cerebellum show traces of plasticity . But the way in which those distributed plasticity mechanisms may improve the operational capabilities of the cerebellum is still an open issue.
Modeling and simulating the neural structures which make up our central neural system is instrume... more Modeling and simulating the neural structures which make up our central neural system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranged from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity together with a better handling of the synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running in CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. 
We also demonstrate how the proposed modifications which constitute the main contribution of this study systematically outperform the traditional event-and time-driven techniques under increasing levels of neural complexity.
Combined efforts in the fields of neuroscience, computer science, and biology allowed to design b... more Combined efforts in the fields of neuroscience, computer science, and biology allowed to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Due to the complexity of these brain models that, at the current stage, cannot deal with real-time constraints, it is not possible to embed them into a real-world task. Rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that allows to easily establish a communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of ...
The projection of 3D scenarios onto 2D surfaces produces distortion on the resulting images that ... more The projection of 3D scenarios onto 2D surfaces produces distortion on the resulting images that affects the accuracy of low-level motion primitives. Independently of the motion detection algorithm used, post-processing stages that use motion data are dominated by this distortion artefact. Therefore we need to devise a way of reducing the distortion effect in order to improve the post-processing capabilities of a vision system based on motion cues. In this paper we adopt a space-variant mapping strategy, and describe a computing architecture that finely pipelines all the processing operations to achieve high performance reliable processing. We validate the computing architecture in the framework of a real-world application, a vision-based system for assisting overtaking manoeuvres using motion information to segment approaching vehicles. The overtaking scene from the rear-view mirror is distorted due to perspective, therefore a space-variant mapping strategy to correct perspective distortion arterfaces becomes of high interest to arrive at reliable motion cues.
Proceedings of the 6th Wseas International Conference on Signal Processing Computational Geometry Artificial Vision, 2006
In this work we present a vision system that includes circuits to compute different modalities in... more In this work we present a vision system that includes circuits to compute different modalities in parallel, such as local image features (magnitude, phase and orientation), motion and stereo vision. This becomes possible by efficiently using the massive parallel computing resources of FPGA devices. The paper briefly describes the complete system and discusses the hardware consumption and performance of each visual modality. Finally, the work highlights that huge amount of data produced by such a system and the necessity of on-chip integration mechanisms.
La estimación de flujo óptico es una técnica ampliamente utilizada en la visión por computador. C... more La estimación de flujo óptico es una técnica ampliamente utilizada en la visión por computador. Cada vez más se necesitan y se estudian algoritmos que sean lo más robustos posible de cara a posibles aplicaciones prácticas en entornos reales no controlados. De entre las distintas técnicas, destacan por su robustez y versatilidad los métodos de computación basados sobre la fase de la señal de luminancia, que como mayor inconveniente tienen la alta carga computacional de los mismos. En este artículo se presenta un sistema de cálculo de flujo óptico en FPGA basado en la fase de la señal que se beneficia de tales propiedades. Además el sistema trabaja con múltiples orientaciones y escalas espaciales (modelo multiescala), lo que le permite una mayor precisión y un mayor rango de velocidades detectables. El gran nivel de paralelismo utilizado en los dispositivos FPGA y el diseño con etapas de cauce de grano fino permiten alcanzar el procesamiento en tiempo real en imágenes de resolución VGA.
The cerebellar microcircuit has been the work bench for theoretical and computational modeling si... more The cerebellar microcircuit has been the work bench for theoretical and computational modeling since the beginning of neuroscientific research. The regular neural architecture of the cerebellum inspired different solutions to the long-standing issue of how its circuitry could control motor learning and coordination. Originally, the cerebellar network was modeled using a statistical-topological approach that was later extended by considering the geometrical organization of local microcircuits. However, with the advancement in anatomical and physiological investigations, new discoveries have revealed an unexpected richness of connections, neuronal dynamics and plasticity, calling for a change in modeling strategies, so as to include the multitude of elementary aspects of the network into an integrated and easily updatable computational framework. Recently, biophysically accurate "realistic" models using a bottom-up strategy accounted for both detailed connectivity and neuronal nonlinear membrane dynamics. In this perspective review, we will consider the state of the art and discuss how these initial efforts could be further improved. Moreover, we will consider how embodied neurorobotic models including spiking cerebellar networks could help explaining the role and interplay of distributed forms of plasticity. We envisage that realistic modeling, combined with closedloop simulations, will help to capture the essence of cerebellar computations and could eventually be applied to neurological diseases and neurorobotic control systems.
This paper presents a computer aided diagnosis system for skin lesions. Diverse parameters or fea... more This paper presents a computer aided diagnosis system for skin lesions. Diverse parameters or features extracted from fluorescence images are evaluated for cancer diagnosis. The selection of parameters has a significant effect on the cost and accuracy of an automated classifier. The genetic algorithm (GA) performs parameters selection using the classifier of the K-nearest neighbours (KNN). We evaluate the classification performance of each subset of parameters selected by the genetic algorithm. This classification approach is modular and enables easy inclusion and exclusion of parameters. This facilitates the evaluation of their significance related to the skin cancer diagnosis. We have implemented this parameter evaluation scheme adopting a strategy that automatically optimizes the K-nearest neighbours classifier and indicates which features are more relevant for the diagnosis problem.
Computer Vision and Image Understanding, Dec 1, 2008
Optical-flow computation is a well-known technique and there are important fields in which the application of this visual modality commands high interest. Nevertheless, most real-world applications require real-time processing, an issue which has only recently been addressed. Most real-...
Proceedings of the 7th International Work-Conference on Artificial and Natural Neural Networks, Part II: Artificial Neural Nets Problem Solving Methods, Jun 5, 2009
Abstract. In this paper we consider one of the big challenges when constructing modular behaviour architectures for the control of real systems, that is, how to decide when a module is allowed control of the ... actuators. In order to implement the behaviour the robot ...
Deep cerebellar nuclei neurons receive both inhibitory (GABAergic) synaptic currents from Purkinje cells (within the cerebellar cortex) and excitatory (glutamatergic) synaptic currents from mossy fibers. Those two deep cerebellar nucleus inputs are also thought to be adaptive, embedding interesting properties in the framework of accurate movements. We show that distributed spike-timing-dependent plasticity (STDP) mechanisms located at different cerebellar sites (parallel fibers to Purkinje cells, mossy fibers to deep cerebellar nucleus cells, and Purkinje cells to deep cerebellar nucleus cells) in closed-loop simulations provide an explanation for the complex learning properties of the cerebellum in motor learning. Concretely, we propose a new mechanistic cerebellar spiking model. In this new model, the deep cerebellar nuclei embed a dual functionality: they act as a gain-adaptation mechanism and as a facilitator for slow memory consolidation at mossy fiber to deep cerebellar nucleus synapses. Equipping the cerebellum with excitatory (e-STDP) and inhibitory (i-STDP) mechanisms at deep cerebellar nuclei afferents allows the accommodation of synaptic memories that were formed at parallel fiber to Purkinje cell synapses and then transferred to mossy fiber to deep cerebellar nucleus synapses. These adaptive mechanisms also contribute to modulating the deep-cerebellar-nucleus output firing rate (output gain modulation toward optimizing its working range).
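The distributed plasticity rules above share the standard pair-based STDP form: a presynaptic spike that precedes a postsynaptic one potentiates the synapse, the reverse ordering depresses it, each with an exponentially decaying time window. A minimal sketch of that kernel, with illustrative amplitudes and time constants rather than the paper's fitted values:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).

    Amplitudes and time constants are illustrative placeholders.
    """
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)    # causal pair -> LTP
    return -a_minus * math.exp(dt / tau_minus)      # anti-causal pair -> LTD

def apply_stdp(w, spike_pair_dts, w_min=0.0, w_max=1.0):
    """Accumulate pair contributions, clipping the weight to its working range."""
    for dt in spike_pair_dts:
        w = min(w_max, max(w_min, w + stdp_dw(dt)))
    return w
```

Separate excitatory and inhibitory variants (the e-STDP and i-STDP of the abstract) would use the same kernel shape with different signs and constants at mossy-fiber and Purkinje-cell afferents respectively.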
Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of...
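The building-block idea above can be illustrated by composing a few generic stages into a familiar retinal response. In this sketch, the block names and their composition into an ON-center-like transient response are illustrative, not the paper's actual microcircuit library:

```python
def low_pass(signal, alpha=0.3):
    """Single-pole temporal low-pass filter (one reusable microcircuit block)."""
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

def rectify(signal):
    """Half-wave rectification block (firing-rate outputs are non-negative)."""
    return [max(0.0, x) for x in signal]

def on_center(center, surround):
    """Center-surround opponency: a slow surround subtracted from a fast center.

    Built purely by composing the two blocks above, mirroring the paper's
    strategy of assembling retina functions from shared microcircuits.
    """
    fast = low_pass(center, alpha=0.5)
    slow = low_pass(surround, alpha=0.1)
    return rectify([c - s for c, s in zip(fast, slow)])

# A step of light drives a transient response that decays as the surround catches up.
step = [0.0] * 5 + [1.0] * 45
response = on_center(step, step)
```

Other pathways (OFF-center, motion-sensitive, adapting) would reuse the same blocks with different wiring and constants, which is the generalization the abstract argues for.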
A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware, while the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware, and its scalability and performance are evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
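The "gradual injection of charge" that makes this synapse model awkward for event-driven software is easy to see in a time-stepped form: each presynaptic spike increments a conductance that then decays with the synaptic time constant, continuously driving the membrane between events. A minimal Euler-integration sketch, assuming illustrative constants rather than the platform's actual parameters:

```python
def simulate(spike_times, t_end=100.0, dt=0.1,
             tau_syn=5.0, tau_m=20.0, e_syn=0.0, v_rest=-70.0, w=0.05):
    """Membrane trace (mV) of a leaky neuron driven by one conductance synapse.

    All constants are illustrative; the dimensionless conductance keeps the
    sketch simple. Times are in milliseconds.
    """
    g, v = 0.0, v_rest
    trace = []
    spike_steps = set(round(t / dt) for t in spike_times)
    for step in range(int(t_end / dt)):
        if step in spike_steps:
            g += w                                   # spike increments conductance
        g -= dt * g / tau_syn                        # conductance decays exponentially
        # Leak toward rest plus synaptic current g * (E_syn - V): charge is
        # injected gradually for ~tau_syn after the spike, not instantaneously.
        v += dt * ((v_rest - v) / tau_m + g * (e_syn - v))
        trace.append(v)
    return trace
```

Because the state evolves at every time step, not just at spike events, a time-driven parallel hardware pipeline of the kind the abstract describes is a natural fit, whereas event-driven software would have to approximate the inter-spike conductance decay analytically.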
Papers by Eduardo Ros