'Deepfaking the mind' could improve brain-computer interfaces for people with disabilities

Researchers at the USC Viterbi School of Engineering are using generative adversarial networks (GANs)—technology best known for creating deepfake videos and photorealistic human faces—to improve brain-computer interfaces for people with disabilities.

In a paper published in Nature Biomedical Engineering, the team successfully taught an AI to generate synthetic brain activity data. The data, known as spike trains, can be fed into machine-learning algorithms to improve the usability of brain-computer interfaces (BCIs).

BCI systems work by analyzing a person's brain signals and translating them into commands, allowing the user to control devices such as computer cursors using only their thoughts. These devices can improve quality of life for people with motor dysfunction or paralysis, even those struggling with locked-in syndrome, in which a person is fully conscious but unable to move or communicate.

Various forms of BCI are already available, from caps that measure brain signals to devices implanted in brain tissue. New use cases are being identified all the time, from neurorehabilitation to treating depression. But despite all of this promise, it has proved challenging to make these systems fast and robust enough for the real world.

Specifically, to make sense of their inputs, BCIs need huge amounts of neural data and long periods of training, calibration and learning.

"Getting enough data for the algorithms that power BCIs can be difficult, expensive, or even impossible if paralyzed individuals are not able to produce sufficiently robust brain signals," said Laurent Itti, a computer science professor and study co-author.

Another obstacle: the technology is user-specific and has to be trained from scratch for each person.

Generating synthetic neurological data

What if, instead, you could create synthetic neurological data, generated entirely by a computer, that could "stand in" for data obtained from the real world?

Enter generative adversarial networks. Best known for creating deepfakes, GANs pit a generator network against a discriminator in a trial-and-error process, letting them create a virtually unlimited number of new, similar images.
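
For readers unfamiliar with the setup, the sketch below shows a minimal GAN training step in PyTorch. It is illustrative only: the network sizes, data dimensions and optimizer settings are assumptions for this sketch, not the architecture used in the study.

    # Minimal sketch of a GAN training step (illustrative only, not the study's model).
    # A generator maps random noise to fake samples; a discriminator learns to tell
    # real from fake; each network improves by competing against the other.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 64  # assumed sizes, for illustration

    generator = nn.Sequential(
        nn.Linear(latent_dim, 128), nn.ReLU(),
        nn.Linear(128, data_dim),
    )
    discriminator = nn.Sequential(
        nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
        nn.Linear(128, 1),
    )

    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    def train_step(real_batch):
        n = real_batch.size(0)
        ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

        # Discriminator step: label real samples 1 and generated samples 0.
        fake = generator(torch.randn(n, latent_dim)).detach()
        loss_d = bce(discriminator(real_batch), ones) + bce(discriminator(fake), zeros)
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()

        # Generator step: try to make the discriminator label fakes as real.
        fake = generator(torch.randn(n, latent_dim))
        loss_g = bce(discriminator(fake), ones)
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()
        return loss_d.item(), loss_g.item()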

Lead author Shixian Wen, a Ph.D. student advised by Itti, wondered if GANs could also create training data for BCIs by generating synthetic neurological data indistinguishable from the real thing.

In an experiment described in the paper, the researchers trained a deep-learning spike synthesizer with one session of data recorded from a monkey reaching for an object. Then, they used the synthesizer to generate large amounts of similar—albeit fake—neural data.
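The synthesis step might look roughly like the sketch below: draw random latent codes, map them through a trained generator to firing rates, and sample spike counts. It reuses the toy generator from the sketch above and is an assumption-laden simplification, not the spike synthesizer described in the paper.

    # Illustrative synthesis step: random latent codes -> nonnegative firing rates
    # -> Poisson spike counts. Reuses the toy `generator` defined in the sketch above.
    import torch

    def synthesize_spike_trains(generator, n_trials, latent_dim=16, seed=0):
        torch.manual_seed(seed)
        with torch.no_grad():
            z = torch.randn(n_trials, latent_dim)   # random latent codes
            rates = torch.relu(generator(z))        # nonnegative firing rates per neuron
            spikes = torch.poisson(rates)           # sampled spike counts per time bin
        return spikes                               # shape: (n_trials, data_dim)

    # Fabricate a large synthetic training set from a model fit to one recording session.
    synthetic_data = synthesize_spike_trains(generator, n_trials=10_000)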

The team then combined the synthesized data with small amounts of new real data—either from the same monkey on a different day, or from a different monkey—to train a BCI. This approach got the system up and running much faster than current standard methods. In fact, the researchers found that GAN-synthesized neural data improved a BCI's overall training speed by up to 20 times.
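As a hedged sketch of that transfer recipe, one could pretrain a decoder on the plentiful synthetic data and then fine-tune it on a small real recording from the new session or animal. The decoder architecture, placeholder tensors and training schedule below are assumptions for illustration, not the study's actual pipeline.

    # Sketch of the transfer idea: pretrain on synthetic data, fine-tune on a little real data.
    import torch
    import torch.nn as nn

    # Placeholder datasets: spike counts from 64 neurons per trial -> 2-D reach kinematics.
    synthetic_spikes = torch.randn(10_000, 64).abs()   # stands in for GAN-synthesized data
    synthetic_kin = torch.randn(10_000, 2)
    real_spikes = torch.randn(200, 64).abs()           # stands in for a short real recording
    real_kin = torch.randn(200, 2)

    decoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))
    loss_fn = nn.MSELoss()

    def fit(model, x, y, epochs, lr):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    # 1) Pretrain on the large synthetic dataset (cheap to produce in bulk).
    fit(decoder, synthetic_spikes, synthetic_kin, epochs=50, lr=1e-3)
    # 2) Fine-tune on the small real recording from the new session or monkey.
    fit(decoder, real_spikes, real_kin, epochs=10, lr=1e-4)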

"Less than a minute's worth of real data combined with the synthetic data works as well as 20 minutes of real data," said Wen.

"It is the first time we've seen AI generate the recipe for thought or movement via the creation of synthetic spike trains. This research is a critical step towards making BCIs more suitable for use."

Additionally, after training on one experimental session, the system rapidly adapted to new sessions, or subjects, using limited additional neural data.

"That's the big innovation here—creating fake spike trains that look just like they come from this person as they imagine doing different motions, then also using this data to assist with learning on the next person," said Itti.

Beyond BCIs, GAN-generated synthetic data could lead to breakthroughs in other data-hungry areas of artificial intelligence by speeding up training and improving performance.

"When a company is ready to start commercializing a robotic skeleton, robotic arm or speech synthesis system, they should look at this method, because it might help them with accelerating the training and retraining," said Itti. "As for using GAN to improve brain-computer interfaces, I think this is only the beginning."

More information: Shixian Wen et al, Rapid adaptation of brain–computer interfaces to new neuronal ensembles or participants via generative modelling, Nature Biomedical Engineering (2021). DOI: 10.1038/s41551-021-00811-z

