But that could soon change: Researchers at MIT and the Georgia Institute of Technology have developed a way to automate the process of finding and recording information from neurons in the living brain. The researchers have shown that a robotic arm guided by a cell-detecting computer algorithm can identify and record from neurons in the living mouse brain with better accuracy and speed than a human experimenter.
The new automated process eliminates the need for months of training and provides long-sought information about living cells’ activities. Using this technique, scientists could classify the thousands of different types of cells in the brain, map how they connect to each other, and figure out how diseased cells differ from normal cells.
The project is a collaboration between the labs of Ed Boyden, associate professor of biological engineering and brain and cognitive sciences at MIT, and Craig Forest, an assistant professor in the George W. Woodruff School of Mechanical Engineering at Georgia Tech.
“Our team has been interdisciplinary from the beginning, and this has enabled us to bring the principles of precision machine design to bear upon the study of the living brain,” Forest says. His graduate student, Suhasa Kodandaramaiah, spent the past two years as a visiting student at MIT, and is the lead author of the study, which appears in the May 6 issue of Nature Methods.
The method could be particularly useful in studying brain disorders such as schizophrenia, Parkinson’s disease, autism and epilepsy, Boyden says. “In all these cases, a molecular description of a cell that is integrated with [its] electrical and circuit properties … has remained elusive,” says Boyden, who is a member of MIT’s Media Lab and McGovern Institute for Brain Research. “If we could really describe how diseases change molecules in specific cells within the living brain, it might enable better drug targets to be found.”
Automation
Kodandaramaiah, Boyden and Forest set out to automate a 30-year-old technique known as whole-cell patch clamping, which involves bringing a tiny hollow glass pipette in contact with the cell membrane of a neuron, then opening up a small pore in the membrane to record the electrical activity within the cell. This skill usually takes a graduate student or postdoc several months to learn.
Kodandaramaiah spent about four months learning the manual patch-clamp technique, giving him an appreciation for its difficulty. “When I got reasonably good at it, I could sense that even though it is an art form, it can be reduced to a set of stereotyped tasks and decisions that could be executed by a robot,” he says.
To that end, Kodandaramaiah and his colleagues built a robotic arm that lowers a glass pipette into the brain of an anesthetized mouse with micrometer accuracy. As it moves, the pipette monitors a property called electrical impedance — a measure of how difficult it is for electricity to flow out of the pipette. If there are no cells around, electricity flows and impedance is low. When the tip hits a cell, electricity can’t flow as well and impedance goes up.
The pipette takes two-micrometer steps, measuring impedance 10 times per second. Once it detects a cell, it can stop instantly, preventing it from poking through the membrane. “This is something a robot can do that a human can’t,” Boyden says.
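To make the logic of that descent concrete, here is a minimal sketch, not the authors' code, of an impedance-monitored stepping loop like the one described above. The hardware calls `step_pipette_down()` and `measure_impedance()` are hypothetical stand-ins for the robot's motor and amplifier interfaces, and the threshold value is purely illustrative.

```python
import time

STEP_UM = 2.0            # descent step size, per the article
SAMPLE_INTERVAL_S = 0.1  # impedance is sampled 10 times per second
THRESHOLD_RATIO = 1.2    # assumed relative jump that signals a cell (illustrative)

def hunt_for_neuron(step_pipette_down, measure_impedance, max_steps=500):
    """Lower the pipette in small steps until impedance rises, suggesting a cell."""
    baseline = measure_impedance()
    for _ in range(max_steps):
        step_pipette_down(STEP_UM)
        time.sleep(SAMPLE_INTERVAL_S)
        z = measure_impedance()
        if z > THRESHOLD_RATIO * baseline:
            # Impedance jumped: the tip is likely pressed against a cell membrane,
            # so stop immediately rather than poking through it.
            return True
        baseline = z  # track slow drift in the open-pipette impedance
    return False
```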
Once the pipette finds a cell, it applies suction to form a seal with the cell’s membrane. Then, the electrode can break through the membrane to record the cell’s internal electrical activity. The robotic system can detect cells with 90 percent accuracy, and establish a connection with the detected cells about 40 percent of the time.
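Continuing the sketch above, the seal-and-record step might look like the following. `apply_suction()`, `seal_resistance()`, and `break_in()` are again hypothetical placeholders for the rig's pressure and amplifier controls; the gigaohm criterion reflects standard whole-cell patch-clamp practice rather than a figure from the study.

```python
import time

GIGASEAL_OHMS = 1e9  # a gigaohm-range seal is the usual benchmark in patch clamping

def patch_detected_cell(apply_suction, seal_resistance, break_in, timeout_s=30):
    """Apply gentle suction until a high-resistance seal forms, then break in."""
    apply_suction(gentle=True)
    start = time.time()
    while time.time() - start < timeout_s:
        if seal_resistance() >= GIGASEAL_OHMS:
            break_in()   # rupture the membrane patch for whole-cell recording
            return True
        time.sleep(0.5)
    return False         # no seal formed; retract and try another cell
```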
The researchers also showed that their method can be used to determine the shape of the cell by injecting a dye; they are now working on extracting a cell’s contents to read its genetic profile.
Development of the new technology was funded primarily by the National Institutes of Health, the National Science Foundation and the MIT Media Lab.
New era for robotics
The researchers recently created a startup company, Neuromatic Devices, to commercialize the device.
The researchers are now working on scaling up the number of electrodes so they can record from multiple neurons at a time, potentially allowing them to determine how different parts of the brain are connected.
They are also working with collaborators to start classifying the thousands of types of neurons found in the brain. This “parts list” for the brain would identify neurons not only by their shape — which is the most common means of classification — but also by their electrical activity and genetic profile.
“If you really want to know what a neuron is, you can look at the shape, and you can look at how it fires. Then, if you pull out the genetic information, you can really know what’s going on,” Forest says. “Now you know everything. That’s the whole picture.”
Boyden says he believes this is just the beginning of using robotics in neuroscience to study living animals. A robot like this could potentially be used to infuse drugs at targeted points in the brain, or to deliver gene therapy vectors. He hopes it will also inspire neuroscientists to pursue other kinds of robotic automation — such as in optogenetics, the use of light to perturb targeted neural circuits and determine the causal role that neurons play in brain functions.
Neuroscience is one of the few areas of biology in which robots have yet to make a big impact, Boyden says. “The genome project was done by humans and a giant set of robots that would do all the genome sequencing. In directed evolution or in synthetic biology, robots do a lot of the molecular biology,” he says. “In other parts of biology, robots are essential.”
Other co-authors include MIT grad student Giovanni Talei Franzesi and MIT postdoc Brian Y. Chow.
Research News & Publications Office
Georgia Institute of Technology
75 Fifth Street, N.W., Suite 314
Atlanta, Georgia 30308 USA
Writer: Anne Trafton, MIT News
CITATIONS
Nature Methods (May 6)