Monkey's Brain Can "Plug and Play" to Control Computer With Thought

Researchers show brain can learn to operate prosthetic device effortlessly


Erico Guizzo is IEEE Spectrum's Digital Innovation Director.

21 July 2009—Our brains have a remarkable ability to assimilate motor skills that allow us to perform a host of tasks almost automatically—driving a car, riding a bicycle, typing on a keyboard. Now add another to the list: operating a computer using only thoughts.

Researchers at the University of California, Berkeley, have demonstrated how rhesus monkeys with electrodes implanted in their brains used their thoughts to control a computer cursor. Once the animals had mastered the task, they could repeat it proficiently day after day. The ability to repeat such feats is unprecedented in the field of neuroprosthetics. It reflects a major finding by the scientists: A monkey’s brain is able to develop a motor memory for controlling a virtual device in a manner similar to the way it creates such a memory for the animal’s body.

The new study, whose findings should also apply to humans, provides hope that physically disabled people may one day be able to operate advanced prosthetics in a natural, effortless way. Previous research in brain-machine interfaces, or BMIs, had already shown that monkeys and humans could use thought to control robots and computers in real time. But subjects weren’t able to retain the skills from one session to another, and the BMI system had to be recalibrated every session. In this new study, monkey do, monkey won’t forget.

“Every day we just put the monkeys to do the task, and they immediately recalled how to control the device,” says Jose Carmena, an IEEE senior member and professor of electrical engineering, cognitive science, and neuroscience who led the study. “It was like ‘plug and play.’”

Carmena and Karunesh Ganguly, a postdoc in Carmena’s lab, describe their work in a paper published today in PLoS Biology.

The findings may “change the whole way that people have thought about how to approach brain-machine interfaces,” says Lena Ting, a professor of biomedical engineering at Emory University and the Georgia Institute of Technology, in Atlanta. Previous research, she explains, tried to use the parts of the brain that operate real limbs to control an artificial one. The Berkeley study suggests that an artificial arm may not need to rely on brain signals related to the natural arm; the brain can assimilate the artificial device as if it were a new part of the body.

Krishna Shenoy, head of the Neural Prosthetic Systems Laboratory, at Stanford University, says the study is “beautiful,” adding that the “day-over-day learning is impressive and has never before been demonstrated so clearly.”

At the heart of the findings is the fact that the researchers used the same set of neurons throughout the three-week-long study. Keeping track of the same neurons is difficult, and previous experiments had relied on varying groups of neurons from day to day.

The Berkeley researchers implanted arrays of microelectrodes on the primary motor cortex, about 2 to 3 millimeters deep into the brain, tapping 75 to 100 neurons. The procedure was similar to that of other groups. The difference was that here the scientists carefully monitored the activity of these neurons using software that analyzed the waveform and timing of the signals. When they spotted a subset of 10 to 40 neurons that didn’t seem to change from day to day, they’d start the experiment; several times, one or more neurons would stop firing, and they’d have to restart from scratch. But the persistence paid off.
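
One simple way to picture that monitoring step is to compare each unit’s average spike waveform across sessions and count it as the same neuron only if the waveforms match closely. The sketch below is a toy illustration of that idea, not the lab’s actual software; the function name, correlation threshold, and synthetic waveforms are assumptions made for the example (the researchers also compared the timing of the signals, which this sketch ignores).

```python
import numpy as np

def same_unit(waveform_day1, waveform_day2, min_corr=0.95):
    """Crude stability check: treat two recordings as the same neuron if their
    mean spike waveforms are highly correlated. The threshold, and the use of
    waveform correlation alone, are illustrative assumptions."""
    w1 = np.asarray(waveform_day1, dtype=float)
    w2 = np.asarray(waveform_day2, dtype=float)
    return np.corrcoef(w1, w2)[0, 1] >= min_corr

# Example with synthetic data: two noisy copies of one template pass the check.
t = np.linspace(0.0, 1.0, 48)
template = np.exp(-((t - 0.3) ** 2) / 0.005) - 0.4 * np.exp(-((t - 0.5) ** 2) / 0.01)
day1 = template + np.random.default_rng(1).normal(scale=0.02, size=t.size)
day2 = template + np.random.default_rng(2).normal(scale=0.02, size=t.size)
print(same_unit(day1, day2))   # True for these synthetic waveforms
```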

While monitoring the neurons, the scientists placed the monkey’s right arm inside a robotic exoskeleton that kept track of its movement. On a screen, the monkey saw a cursor whose position corresponded to the location of its hand. The task consisted of moving the cursor to the center of the screen, waiting for a signal, and then dragging the cursor onto one of eight targets in the periphery. Correct maneuvers were rewarded with sips of fruit juice. While the animal played, the researchers recorded two data sets—the brain signals and corresponding cursor positions.

The next step was to determine whether the animal could perform the same task using only its brain. To find out, the researchers needed first to create a decoder, a mathematical model that translates brain activity into cursor movement. The decoder is basically a set of equations that multiply the firing rates of the neurons by certain numbers, or weights. When the weights have the right values, you can plug the neuronal data into the equations and they’ll spit out the cursor position. To determine the right weights, the researchers had only to correlate the two data sets they’d recorded.
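
In its simplest form, finding those weights is an ordinary least-squares regression of cursor position on firing rates. The sketch below illustrates that idea only; it is not the researchers’ actual decoder, and the array names, shapes, and synthetic data are assumptions made for the example.

```python
import numpy as np

# Hypothetical training data (names and shapes are assumptions):
#   rates: firing rates of N neurons over T time bins, shape (T, N)
#   cursor: the corresponding cursor (hand) positions, shape (T, 2)
rng = np.random.default_rng(0)
T, N = 5000, 30
rates = rng.poisson(5.0, size=(T, N)).astype(float)
true_weights = rng.normal(size=(N, 2))
cursor = rates @ true_weights + rng.normal(scale=0.5, size=(T, 2))

# "Correlating the two data sets": find the weights that best map firing
# rates to cursor position, here by ordinary least squares.
X = np.hstack([rates, np.ones((T, 1))])          # add a constant (bias) column
W, *_ = np.linalg.lstsq(X, cursor, rcond=None)   # W has shape (N + 1, 2)

# Decoding: plug new neuronal data into the same equations and they
# return a predicted cursor position.
new_rates = rng.poisson(5.0, size=(1, N)).astype(float)
predicted_xy = np.hstack([new_rates, np.ones((1, 1))]) @ W
print(predicted_xy)   # estimated (x, y) cursor position
```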

Next the scientists immobilized the monkey’s arm and fed the neuronal signals measured in real time into the decoder. Initially, the cursor moved spastically. But over a week of practice, the monkey’s performance climbed to nearly 100 percent and remained there for the next two weeks. For those later sessions, the monkey didn’t have to undergo any retraining—it promptly recalled how to skillfully maneuver the cursor.

The explanation lies in the behavior of the neurons. The researchers observed that the set of neurons they were monitoring would constantly fire while the animal was in its cage or even sleeping. But when the BMI session began, the neurons quickly locked into a pattern of activity—known as a cortical map—for controlling the cursor. (The researchers replicated the experiment with another monkey.)

The study is a big improvement over early experiments. In past studies, because researchers didn’t keep track of the same set of neurons, they had to reprogram the decoder every time to adapt to the new cortical activity. The changes also meant that the brain couldn’t form a cortical map of the prosthetic device. That limitation raised questions about whether paralyzed people would be able to use prosthetics with enough proficiency to make them really useful.

The Berkeley scientists showed that the cortical map can be stable over time and readily recalled. But they also demonstrated a third characteristic.

“These cortical maps are robust, resistant to interference,” says Carmena. “When you learn to play tennis, that doesn’t make you forget how to drive a car.”

To demonstrate that, the researchers taught the monkey how to use a second decoder. To create the new decoder, they again recorded neuronal activity while the animal manually moved the cursor using the exoskeleton arm. The new data sets contained small fluctuations compared with the original ones, resulting in different weights for the equations. Using a new decoder is analogous to giving a different racket to a tennis player, who needs some practice to get accustomed to it.

As expected, the monkey’s performance was poor at first, but after just a few days it reached nearly 100 percent. What’s more, the researchers could now switch back and forth between the old and new decoders. All the animal saw was that the cursor changed color, and its brain would promptly produce the signals needed. This wasn’t a one-trick monkey, so to speak.

But perhaps more surprising, the researchers also tested a shuffled decoder. They took one of the existing decoders and randomly redistributed the weights in the equations among the neurons. This meant that the new decoder, unlike the early ones, had no relationship to actual movements of the monkey’s arm. It was a bit like giving a tennis player a hammer instead of a different racket.
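
In terms of the earlier decoder sketch, the shuffle amounts to permuting which neuron each row of weights is assigned to, so the equations no longer correspond to the arm movements they were fit on. A minimal illustration, again with assumed names and shapes:

```python
import numpy as np

# Suppose W holds one row of decoder weights per neuron plus a final bias row,
# as in the earlier least-squares sketch (shapes and names are assumptions).
rng = np.random.default_rng(42)
N = 30
W = rng.normal(size=(N + 1, 2))               # hypothetical fitted weights

# Shuffling the decoder: randomly reassign which neuron each weight row
# belongs to, so each neuron's firing rate is now multiplied by weights
# that were fit for a different neuron.
perm = rng.permutation(N)
W_shuffled = np.vstack([W[:N][perm], W[N:]])  # permute neuron rows, keep bias
```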

What followed was a big surprise: After about three days, the monkeys had learned to use the new decoder. Just as before, practice allowed the neurons to develop a cortical map for the new task.

“It’s pretty remarkable that it could adapt to basically a corrupted decoder,” says Nicho Hatsopoulos, a professor of computational neuroscience at the University of Chicago. He says there’s a lot of focus in the field on building better and better decoders, but the new results suggest that may not be so important “because the monkey will learn to improve its own performance.”

Carmena believes that the brain’s ability to store prosthetic motor memories is a key step toward practical BMI systems. Yet he emphasizes that it’s hard to predict when this technology will become available and is careful not to give patients false expectations. He says that the improvements needed include making the BMI systems less invasive and able to incorporate more than just visual feedback, with prosthetics that can provide users with tactile information, for example.

Still, he knows where he wants to go.

“I have this idea for a long-term goal,” he says. “Can you tie your shoe in BMI mode?”
