Research on both natural intelligence (NI) and artificial intelligence (AI) generally assumes that the future resembles the past: intelligent agents or systems (what we call 'intelligence') observe and act on the world, then use this experience to act on future experiences of the same kind. We call this 'retrospective learning'. For example, an intelligence may see a set of pictures of objects, along with their names, and learn to name them. A retrospective learning intelligence would merely be able to name more pictures of the same objects. We argue that this is not what true intelligence is about. In many real-world problems, both NIs and AIs will have to learn for an uncertain future. Both must update their internal models to be useful for future tasks, such as naming fundamentally new objects and using these objects effectively in a new context or to achieve previously unencountered goals. This ability to learn for the future we call 'prospective learning'...
2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), 2021
Consumer neuroscience is a rapidly emerging field, and the ability to detect consumer attitudes and states via real-time passive technologies is highly valuable. While many studies have attempted to classify consumer emotions and the perceived pleasantness of olfactory products, no known machine learning approach has yet been developed to directly predict consumer reward-based decision-making, which has greater behavioral relevance. In this proof-of-concept study, participants indicated their decision to have fragrance products repeated after fixed exposures to them. Single-trial power spectral density (PSD) and approximate entropy (ApEn) features were extracted from EEG signals recorded with a wearable device during fragrance exposures, and served as subject-independent inputs for four supervised learning algorithms (kNN, linear SVM, RBF SVM, XGBoost). Using a cross-validation procedure, kNN yielded the best classification accuracy (77.6%) using both PSD and ApEn features. Acknowledging the challenging prospects of single-trial classification of high-order cognitive states, especially with wearable EEG devices, this study is the first to demonstrate the viability of using sensor-level features toward practical, objective prediction of consumer reward experience.
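The single-trial pipeline described above can be sketched roughly as follows. This is a minimal, hedged illustration assuming Welch-based band power for the PSD features, a textbook ApEn(m, r) implementation, and scikit-learn's kNN with cross-validation; the channel count, frequency bands, and k are illustrative assumptions, not the study's actual parameters.

```python
# Hedged sketch of a single-trial EEG pipeline: Welch PSD + approximate entropy
# features fed to a kNN classifier. Bands, window sizes, channel count and k
# are illustrative assumptions, not the paper's settings.
import numpy as np
from scipy.signal import welch
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def approximate_entropy(x, m=2, r_factor=0.2):
    """Classic ApEn(m, r) of a 1-D signal; r is scaled by the signal's std."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def phi(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        dists = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)
        c = (dists <= r).mean(axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

def trial_features(eeg, fs=250):
    """eeg: (channels, samples) for one trial -> band-power + ApEn feature vector."""
    bands = [(4, 8), (8, 13), (13, 30), (30, 45)]             # theta..gamma (assumed)
    f, psd = welch(eeg, fs=fs, nperseg=fs)                     # per-channel Welch PSD
    band_power = [psd[:, (f >= lo) & (f < hi)].mean(axis=1) for lo, hi in bands]
    apen = [approximate_entropy(ch) for ch in eeg]
    return np.concatenate(band_power + [apen])

# Toy usage: random "trials" stand in for wearable-EEG recordings (4 channels, 2 s at 250 Hz).
rng = np.random.default_rng(0)
X = np.array([trial_features(rng.standard_normal((4, 500))) for _ in range(40)])
y = rng.integers(0, 2, size=40)                                # repeat vs. not-repeat decision
knn = KNeighborsClassifier(n_neighbors=5)
print("CV accuracy:", cross_val_score(knn, X, y, cv=5).mean())
```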
2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), 2021
Olfactory hedonic perception involves a complex interplay among an ensemble of neurocognitive systems implicated in sensory, affective and reward processing. However, the mechanisms of these inter-system interactions have yet to be well characterized. Here, we employ directed functional connectivity networks estimated from source-localized EEG to uncover how brain regions across the olfactory, emotion and reward systems integrate organically into cross-system communities. Using the integration coefficient, a graph-theoretic measure, we quantified the effect of exposure to fragrance stimuli of different hedonic values (high vs. low pleasantness) on inter-system interactions. Our analysis focused on beta band activity (13-30 Hz), which is known to facilitate integration of cortical areas involved in sensory perception. Higher-pleasantness stimuli induced elevated integration for the reward system, but not for the emotion and olfactory systems. Furthermore, the nodes of the reward system showed more outward connections to the emotion and olfactory systems than inward connections from those systems. These results suggest the centrality of the reward system, supported by beta oscillations, in actively coordinating multi-system interactivity to give rise to hedonic experiences during olfactory perception.
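To make the graph measure concrete, the sketch below computes one plausible form of a node-level integration coefficient on a directed connectivity matrix: the fraction of a node's outgoing connection weight that targets nodes of other systems. The exact definition used in the study, the connectivity values, and the system labels here are all assumptions for illustration.

```python
# Hedged sketch of a node-level "integration coefficient" on a directed
# connectivity matrix: the fraction of a node's outgoing weight that targets
# other systems. The exact graph measure and the system labels are assumptions.
import numpy as np

def integration_coefficient(W, labels):
    """W[i, j]: directed weight i -> j; labels: system id per node."""
    W = np.asarray(W, dtype=float)
    labels = np.asarray(labels)
    out_total = W.sum(axis=1)
    same_system = labels[:, None] == labels[None, :]
    out_within = np.where(same_system, W, 0.0).sum(axis=1)
    with np.errstate(invalid="ignore", divide="ignore"):
        coeff = 1.0 - out_within / out_total
    return np.nan_to_num(coeff)

# Toy usage: 9 nodes split across olfactory (0), emotion (1) and reward (2) systems.
rng = np.random.default_rng(1)
W = rng.random((9, 9))
np.fill_diagonal(W, 0.0)
labels = np.repeat([0, 1, 2], 3)
coeff = integration_coefficient(W, labels)
for sys_id, name in enumerate(["olfactory", "emotion", "reward"]):
    print(name, coeff[labels == sys_id].mean())
```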
Unlike conventional frame-based sensors, event-based visual sensors output information through spikes at a high temporal resolution. By encoding only changes in pixel intensity, they offer a low-power, low-latency approach to visual information sensing. To use this information for higher sensory tasks like object recognition and tracking, an essential simplification step is the extraction and learning of features. An ideal feature descriptor must be robust to changes involving (i) local transformations and (ii) re-appearances of a local event pattern. To that end, we propose a novel spatiotemporal feature representation learning algorithm based on slow feature analysis (SFA). Using SFA, smoothly changing linear projections are learnt that are robust to local visual transformations. To determine whether the features can learn to be invariant to various visual transformations, feature point tracking tasks are used for evaluation. Extensive experiments across two data...
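As background for the method named above, here is a minimal sketch of linear SFA: whiten the input, then keep the directions along which the whitened signal changes most slowly (the smallest eigenvalues of the covariance of temporal differences). This is the generic textbook formulation, not the paper's event-based variant, and the toy data are assumptions.

```python
# Hedged sketch of linear slow feature analysis (SFA): whiten the input, then
# keep the directions in which the whitened signal changes most slowly.
import numpy as np

def linear_sfa(X, n_components=2):
    """X: (time, features). Returns projections onto the slowest-varying components."""
    X = X - X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-10
    white = eigvec[:, keep] / np.sqrt(eigval[keep])     # whitening transform
    Z = X @ white
    dZ = np.diff(Z, axis=0)                             # temporal differences
    dval, dvec = np.linalg.eigh(np.cov(dZ, rowvar=False))
    W = white @ dvec[:, :n_components]                  # smallest eigenvalues = slowest
    return X @ W

# Toy usage: a slow sine mixed into fast noise is recovered as the slowest feature.
t = np.linspace(0, 10, 2000)
slow = np.sin(2 * np.pi * 0.2 * t)
X = np.column_stack([slow + 0.1 * np.random.randn(t.size),
                     np.random.randn(t.size),
                     0.5 * slow + np.random.randn(t.size)])
Y = linear_sfa(X, n_components=1)
print(abs(np.corrcoef(Y[:, 0], slow)[0, 1]))
```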
In this paper, we address the challenging problem of action recognition using event-based cameras. Recognising most gestural actions requires high temporal precision when sampling visual information. Actions are defined by motion, and therefore, when using event-based cameras, it is often unnecessary to re-sample the entire scene. Neuromorphic, event-based cameras offer an alternative approach to visual information acquisition by asynchronously time-encoding pixel intensity changes through temporally precise spikes (10-microsecond resolution), making them well equipped for action recognition. However, other challenges exist that are intrinsic to event-based imagers, such as a lower signal-to-noise ratio and spatiotemporally sparse information. One option is to convert event data into frames, but this could result in a significant loss of temporal precision. In this work we introduce spatiotemporal filtering in the spike-event domain as an alternative way of channeling ...
Abnormal tumor hemodynamics are a critical determinant of a tumor's microenvironment (TME), and profoundly affect drug delivery, therapeutic efficacy and the emergence of drug and radio-resistance. Since multiple hemodynamic variables can simultaneously exhibit transient and spatiotemporally heterogeneous behavior, there is an exigent need for analysis tools that employ multiple variables to characterize the anomalous hemodynamics within the TME. To address this, we developed a new toolkit called HemoSYS for quantifying the hemodynamic landscape within angiogenic microenvironments. It employs multivariable time-series data such as in vivo tumor blood flow (BF), blood volume (BV) and intravascular oxygen saturation (Hbsat) acquired concurrently using a wide-field multicontrast optical imaging system. The HemoSYS toolkit consists of propagation, clustering, coupling, perturbation and Fourier analysis modules. We demonstrate the utility of each module for characterizing the in vivo hem...
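As a concrete flavor of what one module in such a toolkit might do, the sketch below implements a simple sliding-window coupling analysis between two concurrently acquired hemodynamic signals. This is not the HemoSYS implementation; the window length, step size and synthetic BF/Hbsat traces are illustrative assumptions.

```python
# Hedged sketch of a "coupling"-style analysis: sliding-window Pearson
# correlation between two concurrently acquired hemodynamic signals.
# Window length, step and the synthetic signals are assumptions.
import numpy as np

def sliding_coupling(a, b, win=64, step=16):
    """Pearson correlation of a vs b in sliding windows; returns (centers, r)."""
    centers, r = [], []
    for start in range(0, len(a) - win + 1, step):
        wa, wb = a[start:start + win], b[start:start + win]
        r.append(np.corrcoef(wa, wb)[0, 1])
        centers.append(start + win // 2)
    return np.array(centers), np.array(r)

# Toy usage: blood flow and Hbsat share a slow oscillation plus independent noise.
t = np.arange(0, 600)                          # e.g. seconds of imaging
common = np.sin(2 * np.pi * t / 120.0)
bf = common + 0.3 * np.random.randn(t.size)
hbsat = 0.8 * common + 0.3 * np.random.randn(t.size)
centers, r = sliding_coupling(bf, hbsat)
print("median coupling:", np.median(r))
```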
Purpose Severe peripheral neuropathy is a common dose-limiting toxicity of taxane chemotherapy, with no effective treatment. Frozen gloves have been shown to reduce the severity of neuropathy in several studies, but they come with undesired side effects such as cold intolerance and, in extreme cases, frostbite. A device with thermoregulatory features that can safely deliver tolerable amounts of cooling while ensuring efficacy is required to overcome the deficiencies of frozen gloves. The role of continuous-flow cooling in the prevention of neurotoxicity caused by paclitaxel has been previously described. This study hypothesized that cryocompression (the addition of dynamic pressure to cooling) may allow delivery of lower temperatures with similar tolerance and potentially improve efficacy. Method A proof-of-concept study was conducted in cancer patients receiving taxane chemotherapy. Each subject underwent four-limb cryocompression with each chemotherapy infusion (three hours) for...
You probably believe that a latent relationship between the brain and the lower limbs exists and that it varies across walking conditions (e.g., walking with or without an exoskeleton). Have you ever considered what the distributions of the measured signals are? To address this question, we simultaneously collected electroencephalogram (EEG) and electromyogram (EMG) signals while healthy participants performed four overground walking conditions without any constraints (e.g., a prescribed speed). The EEG results demonstrated that a wide range of frequencies, from the delta band to the gamma band, was involved in walking. The EEG power spectral density (PSD) was significantly different in sensorimotor and posterior parietal areas between exoskeleton-assisted walking and non-exoskeleton walking. The EMG PSD difference was predominantly observed in the theta band and the gastrocnemius lateralis muscle. EEG-EMG PSD correlations differed among walking conditions. The alpha and beta bands were primarily involved in consistently increasing EEG-EMG PSD correlations across the walking conditions, while the theta band was primarily involved in consistently decreasing correlations, as observed for the EEG involvement. However, there was no dominant frequency band for the EMG involvement. Channels located over the sensorimotor area were primarily involved in consistently decreasing EEG-EMG PSD correlations, and the outer-ring channels were involved in the increasing EEG-EMG PSD correlations. Our study revealed the spectral and spatial distributions relevant to overground walking and deepened the understanding of EEG and EMG representations during locomotion, which may inform the development of a more human-compatible exoskeleton and its usage in motor rehabilitation. INDEX TERMS Correlation distribution, exoskeleton-assisted overground walking, electroencephalogram (EEG), electromyogram (EMG), naturalistic overground walking, power spectral density (PSD).
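A minimal sketch of the kind of EEG-EMG PSD correlation described here: band-limited Welch power is computed per epoch for one EEG channel and one EMG channel and then correlated across epochs. The sampling rate, band edges, epoch length and synthetic signals are all illustrative assumptions rather than the study's settings.

```python
# Hedged sketch of an EEG-EMG PSD correlation: per-epoch band power of one EEG
# and one EMG channel, correlated across epochs. Parameters are assumptions.
import numpy as np
from scipy.signal import welch

def band_power(sig, fs, lo, hi):
    f, psd = welch(sig, fs=fs, nperseg=min(len(sig), fs))
    return psd[(f >= lo) & (f < hi)].mean()

def eeg_emg_band_correlation(eeg, emg, fs=500, band=(8, 13), epoch_s=2):
    """eeg, emg: 1-D signals of equal length; returns Pearson r across epochs."""
    n = int(epoch_s * fs)
    starts = range(0, min(len(eeg), len(emg)) - n + 1, n)
    p_eeg = [band_power(eeg[i:i + n], fs, *band) for i in starts]
    p_emg = [band_power(emg[i:i + n], fs, *band) for i in starts]
    return np.corrcoef(p_eeg, p_emg)[0, 1]

# Toy usage: a shared slow modulation of alpha-band amplitude yields a positive correlation.
rng = np.random.default_rng(2)
t = np.arange(0, 60, 1 / 500)
drive = 1 + 0.5 * np.sin(2 * np.pi * t / 10)
eeg = drive * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
emg = drive * np.sin(2 * np.pi * 10 * t + 1.0) + rng.standard_normal(t.size)
print(eeg_emg_band_correlation(eeg, emg))
```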
Reliable neural interfaces between peripheral nerves and implantable devices are at the center of advanced neural prosthetics and bioelectronic medicine. In this paper, selective sciatic nerve recording and stimulation were investigated using flexible split ring electrodes. The design enabled easy and reliable implantation of active electrodes on the sciatic nerve with minimal pressure on the nerve, while still providing good electrical contact. Selective muscle stimulation was achieved by varying the stimulation configuration of the four active electrodes on the nerve to produce different muscle activation patterns. In addition, partially evoked neural signals were recorded from the nerve using a transverse differential bipolar configuration, demonstrating differential recording capability. We also showed that the quality of the neural signals recorded by the split ring electrode was higher than that of recordings from a commercial cuff electrode in terms of signal-to-noise ratio (SNR). Overall, our data show that this flexible split ring electrode could be effective for future neuromodulation applications.
Neuromorphic image sensors produce activity-driven spiking output at every pixel. These low-power imagers, which encode visual change information in the form of spikes, help reduce computational overhead and enable complex real-time systems such as object recognition and pose estimation. However, event-based vision still lacks algorithms aimed at capturing invariance to transformations. In this work, we propose a methodology for recognizing objects invariant to their pose with the Dynamic Vision Sensor (DVS). A novel Slow-ELM architecture is proposed which combines the effectiveness of Extreme Learning Machines and Slow Feature Analysis. The system can perform 10,000 classifications per second and achieves a 1% classification error for eight objects with views accumulated over 90 degrees of 2D pose.
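For context on the Slow-ELM idea, the sketch below shows only the extreme-learning-machine half: a fixed random hidden layer with a closed-form least-squares readout. The SFA front end and the DVS event encoding are omitted, and the toy data, hidden-layer size and class count are assumptions.

```python
# Hedged sketch of the ELM half of a Slow-ELM-style pipeline: a fixed random
# hidden layer followed by a least-squares readout. The SFA front end and DVS
# event encoding are omitted; sizes and data are assumptions.
import numpy as np

class ELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                    # random hidden features
        T = np.eye(n_classes)[y]                            # one-hot targets
        self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)   # closed-form readout
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

# Toy usage: two Gaussian blobs stand in for pose-accumulated event features.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (100, 20)), rng.normal(2, 1, (100, 20))])
y = np.repeat([0, 1], 100)
print("training accuracy:", (ELM().fit(X, y).predict(X) == y).mean())
```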
IEEE Transactions on Biomedical Engineering, Jan 19, 2016
Electric fields (EF) of approximately 0.2 V/m have been shown to be sufficiently strong both to modulate neuronal activity in the cerebral cortex and to have measurable effects on cognitive performance. We hypothesized that the EF caused by the electrical activity of extracranial muscles during natural chewing may reach similar strength in the cerebral cortex and hence might act as an endogenous modality of brain stimulation. Here, we present first steps towards validating this hypothesis. Using a realistic volume conductor head model of an epilepsy patient who had undergone intracranial electrode placement, and utilizing simultaneous intracranial and extracranial electrical recordings during chewing, we derive predictions about the chewing-related cortical EF strength to be expected in healthy individuals. We find that in the region of the temporal poles, the expected EF strength may reach amplitudes on the order of 0.1-1 V/m. The cortical EF caused by natural chewing could be large enough ...
SpringerBriefs in Electrical and Computer Engineering, 2015
Although advanced prosthetic limbs, such as the modular prosthetic limb (MPL), are now capable of mimicking the dexterity of human limbs, brain-machine interfaces (BMIs) are not yet able to take full advantage of their capabilities. To improve BMI control of the MPL, we are developing a semi-autonomous system, the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system is designed to utilize novel control strategies including hybrid input (adding eye tracking to neural control), supervisory control (decoding high-level patient goals), and intelligent robotics (incorporating computer vision and route planning algorithms). Patients use eye gaze to indicate a desired object that has been recognized by computer vision. They then perform a desired action, such as reaching and grasping, which is decoded and carried out by the MPL via route planning algorithms. Here we present two patients, implanted with electrocorticography (ECoG) and depth electrodes, who controlled the HARMONIE system to perform reaching and grasping tasks; in addition, one patient also used the HARMONIE system to simulate self-feeding. This work builds upon prior research to demonstrate the feasibility of using novel control strategies to enable patients to perform a wider variety of activities of daily living (ADLs).
IEEE Transactions on Biomedical Engineering, Jan 11, 2015
This paper demonstrates flexible epineural strip electrodes (FLESE) for recording from small nerves. The small, strip-shaped FLESE can be attached easily and conformally to nerves of various sizes, reducing nerve damage while preserving recording quality. In addition, to enhance the neural interface, the gold electrode contacts were coated with carbon nanotubes (CNT), which reduced the impedance of the electrodes. We used the FLESEs to record electrically elicited nerve signals (compound neural action potentials, CNAP) from the sciatic nerve in rats. Bipolar and differential bipolar configurations were investigated to optimize the recording configuration of the FLESEs. The successful differential bipolar recordings showed that the total length of the FLESEs could be further reduced while maintaining maximum recording ability, which would be beneficial for recording from very fine nerves. Our results demonstrate that this new concept of FLESEs could play an important r...
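The SNR comparison mentioned above can be illustrated with a simple calculation: the ratio of signal amplitude in an evoked-response window to that in a pre-stimulus noise window, expressed in dB. The window boundaries, sampling rate and synthetic trace below are assumptions, not the paper's recording parameters.

```python
# Hedged sketch of an SNR estimate for an evoked nerve recording: RMS of an
# evoked-response window vs. RMS of a pre-stimulus noise window, in dB.
# Window limits, sampling rate and the synthetic trace are assumptions.
import numpy as np

def snr_db(trace, fs, noise_win=(0.0, 0.01), signal_win=(0.012, 0.02)):
    t = np.arange(len(trace)) / fs
    noise = trace[(t >= noise_win[0]) & (t < noise_win[1])]
    signal = trace[(t >= signal_win[0]) & (t < signal_win[1])]
    return 20 * np.log10(signal.std() / noise.std())

# Toy usage: a small evoked deflection riding on baseline noise.
fs = 20000
t = np.arange(0, 0.03, 1 / fs)
trace = 2e-6 * np.random.randn(t.size)                       # ~2 uV baseline noise
mask = (t >= 0.012) & (t < 0.02)
trace[mask] += 10e-6 * np.sin(2 * np.pi * 500 * (t[mask] - 0.012))  # simulated CNAP
print(round(snr_db(trace, fs), 1), "dB")
```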
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015
This paper introduces a spiking hierarchical model for object recognition which utilizes the precise timing information inherently present in the output of biologically inspired asynchronous Address Event Representation (AER) vision sensors. The asynchronous nature of these systems frees computation and communication from the rigid predetermined timing enforced by system clocks in conventional systems. Freedom from rigid timing constraints opens the possibility of using true timing to our advantage in computation. We show not only how timing can be used in object recognition, but also how it can in fact simplify computation. Specifically, we rely on a simple temporal winner-take-all rather than the more computationally intensive synchronous operations typically used in biologically inspired neural networks for object recognition. This approach to visual computation represents a major paradigm shift from conventional clocked systems and can find application in other sensory modalities and computational tasks. We showcase the effectiveness of the approach by achieving the highest reported accuracy to date (97.5% ± 3.5%) for a previously published four-class card pip recognition task and an accuracy of 84.9% ± 1.9% for a new, more difficult 36-class character recognition task.
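A minimal sketch of a temporal winner-take-all readout, as a generic illustration of the idea named above: among competing class neurons, the one whose first spike arrives earliest wins. The event format and class labels here are assumptions, not the model's actual implementation.

```python
# Hedged sketch of a temporal winner-take-all readout: the class neuron whose
# first spike arrives earliest wins. Event format and classes are assumptions.
import numpy as np

def temporal_wta(spike_times):
    """spike_times: dict class_id -> array of spike times; earliest first spike wins."""
    first_spike = {c: (t.min() if len(t) else np.inf) for c, t in spike_times.items()}
    return min(first_spike, key=first_spike.get)

# Toy usage: class 2 responds fastest, so it wins the classification.
spikes = {0: np.array([12.5, 15.0]),
          1: np.array([9.1]),
          2: np.array([7.3, 8.0]),
          3: np.array([])}
print(temporal_wta(spikes))   # -> 2
```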
5th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics, 2014
Many upper limb amputees are faced with the difficult challenge of using a prosthesis that lacks tactile sensing. State-of-the-art, research-caliber prosthetic hands are often equipped with sophisticated sensors that provide valuable information about the prosthesis and its surrounding environment. Unfortunately, most commercial prosthetic hands do not contain any tactile sensing capabilities. In this paper, a textile-based tactile sensor system was designed, built, and evaluated for use with upper limb prosthetic devices. Despite its simplicity, we demonstrate the ability of the sensors to determine object contact and perturbations due to slip during a grasping task with a prosthetic hand. This suggests that low-cost, customizable textile sensors could serve as part of a closed-loop tactile feedback system for monitoring grasping forces in an upper limb prosthetic device.
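A hedged sketch of how contact and slip might be flagged from a single tactile-sensor channel: contact when the force exceeds a threshold, slip when a brief high-frequency transient appears in the force signal while in contact. The thresholds, sampling rate and simulated grasp below are assumptions, not the paper's detection method.

```python
# Hedged sketch of contact and slip detection from one tactile channel:
# contact via a force threshold, slip via a rate-of-change transient while in
# contact. Thresholds and the simulated grasp are assumptions.
import numpy as np

def detect_contact_and_slip(force, fs=100, contact_thresh=0.5, slip_thresh=3.0):
    contact = force > contact_thresh
    dforce = np.abs(np.gradient(force) * fs)          # force rate of change (N/s)
    slip = contact & (dforce > slip_thresh)
    return contact, slip

# Toy usage: grasp force ramps up, holds, then a brief micro-slip transient appears.
t = np.arange(0, 5, 0.01)
force = np.clip(t, 0, 1.0) * 1.5
force[300:305] -= np.array([0.4, 0.6, 0.5, 0.3, 0.1])  # simulated micro-slip
contact, slip = detect_contact_and_slip(force)
print("slip samples:", np.flatnonzero(slip))
```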
Background Virtual and augmented reality (AR) have become popular modalities for training myoelectric prosthesis control with upper-limb amputees. While some systems have shown moderate success, it is unclear how well the complex motor skills learned in an AR simulation transfer to completing the same tasks in physical reality. Limb loading is a possible dimension of motor skill execution that is absent in current AR solutions and that may help to increase skill transfer between the virtual and physical domains. Methods We implemented an immersive AR environment where individuals could operate a myoelectric virtual prosthesis to accomplish a variety of object relocation manipulations. Intact-limb participants were separated into three groups: the load control group (CGLD; N = 4), the AR control group (CGAR; N = 4), and the experimental group (EG; N = 4). Both the CGAR and EG completed a 5-session prosthesis training protocol in AR while the CGLD performed simple muscle tr...
Intention decoding is an indispensable procedure in hands-free human-computer interaction (HCI). Conventional eye-tracking systems that rely on single-modality fixation duration may issue commands that ignore the user's real intention. In the current study, an eye-brain hybrid brain-computer interface (BCI) interaction system was introduced for intention detection through the fusion of multi-modal eye-tracking and ERP (an EEG-derived measurement) features. Eye-tracking and EEG data were recorded from 64 healthy participants as they performed a 40-min customized free search task for a fixed target icon among 25 icons. The corresponding eye-tracking fixation durations and ERPs were extracted. Five previously validated LDA-based classifiers (RLDA, SWLDA, BLDA, SKLDA, and STDA) and the widely used CNN method were adopted to verify the efficacy of feature fusion in both offline and pseudo-online analyses, and the optimal approach was evaluated by modulating the training set and system response duration. Our study demonstrated that the input of multi-modal eye-tracking and ERP features achieved superior intention-detection performance in single-trial classification of the active search task. Compared with the single-modality ERP feature, this new strategy also yielded consistent accuracy across different classifiers. Moreover, in comparison with the other classification methods, we found that SKLDA exhibited superior performance when fusing features in the offline test (ACC = 0.8783, AUC = 0.9004) and in online simulation with different sample amounts and duration lengths. In sum, the current study demonstrates a novel and effective approach for intention classification using an eye-brain hybrid BCI and further supports the real-life application of hands-free HCI in a more precise and stable manner.
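A minimal sketch of the feature-fusion idea: fixation-duration features are concatenated with ERP amplitude features and classified with scikit-learn's shrinkage LDA, used here as a stand-in for the RLDA/SKLDA variants named above rather than their exact implementations. All data below are synthetic assumptions.

```python
# Hedged sketch of multi-modal fusion for intention detection: fixation
# duration + ERP amplitudes -> shrinkage LDA (stand-in classifier).
# All shapes, offsets and labels are synthetic assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials = 200
fixation = rng.normal(0.4, 0.1, (n_trials, 1))      # fixation duration (s)
erp = rng.standard_normal((n_trials, 16))            # ERP amplitudes per channel
y = rng.integers(0, 2, n_trials)                      # target vs. non-target icon
fixation[y == 1] += 0.15                               # targets are fixated longer
erp[y == 1, :4] += 0.8                                 # and evoke a larger ERP

X_fused = np.hstack([fixation, erp])                   # multi-modal feature fusion
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print("ERP only:", cross_val_score(lda, erp, y, cv=5).mean())
print("fused   :", cross_val_score(lda, X_fused, y, cv=5).mean())
```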
The sense of touch plays a fundamental role in enabling us to interact with our surrounding environment. Indeed, the presence of tactile feedback in prostheses greatly assists amputees in performing daily tasks. Along this line, the present study proposes an integration of artificial tactile and proprioceptive receptors for texture discrimination under varying scanning speeds. Here, we fabricated a soft biomimetic fingertip including an 8 × 8 array tactile sensor and a piezoelectric sensor to mimic the Merkel, Meissner, and Pacinian mechanoreceptors of glabrous skin. A hydro-elastomer sensor was fabricated as an artificial proprioception sensor (muscle spindles) to assess the instantaneous speed of the biomimetic fingertip. In this study, we investigated the concept of the complex receptive field of RA-I and SA-I afferents for naturalistic textures. Next, to evaluate the synergy between the mechanoreceptors and muscle spindle afferents, ten naturalistic textures were manipulated b...