Oct 22, 2015
The Power of Two May Help Explain Brain Design
AUGUSTA, Ga. – At its most basic level, the brain is about the power of two, says Medical College of Georgia neuroscientist Dr. Joe Z. Tsien.
He postulates in his “Theory of Connectivity” that, not unlike high school, where a human clique includes your closest friends, a neural clique typically consists of a couple of similar neurons. But unlike most transient teenage cliques, neural cliques provide a basic, prewired framework for how neurons connect and function for life.
“The brain is not a blank sheet. This complex wiring system that ends up being our brain starts with these cliques,” said Tsien, co-director of the Brain & Behavior Discovery Institute at Georgia Regents University and Georgia Research Alliance Eminent Scholar in Cognitive and Systems Neurobiology.
“We think the brain has these combinatorial connections among brain cells, and through these connections comes the knowledge and flexibility to deal with whatever comes in and goes out.”
Tsien, a memory researcher who created the smart mouse Doogie 15 years ago, is among the many neuroscientists who have been dissecting and analyzing human and animal brains for centuries, trying to figure out how they work and what happens when they don’t.
“This is the traditional approach, to disassemble, much like kids try to figure out a new toy by taking it apart,” said Tsien. This time, he decided to take a cue from theoretical physicists, who approach the vast universe and solar system with a combination of what they know and what they can see, and to perform a “thought experiment” of his own.
“If we understand how the brain was designed, it reveals a lot about how the brain works,” Tsien said. That knowledge should enable better development and testing of therapeutics and provide a better blueprint for how different maladies impact brain function, he said. The right equation also becomes a sort of yardstick for measuring the brain’s capacity and maybe even designing better artificial intelligence.
His result is an organizing principle for the brain, and cliques are only the beginning. Tsien’s “Theory of Connectivity” and the resulting equation, published in the journal Trends in Neurosciences, also hold that neural cliques congregate to form functional connectivity motifs, or FCMs. The more complex the issue at hand, the more cliques join the FCM.
Tsien uses some fundamental quests as an example. “Let’s say a simple organism needs to deal with only two possibilities, like whether to find food or a mate. The FCM to deal with this dilemma would require, at a minimum, three cliques. One clique represents food, another a mate, and a third enables you to look at both as good things that I need,” Tsien said. Complicating the situation, say with the question of whether the creature should really be focused on avoiding predators or poison along the way, requires more neural cliques and a bigger FCM.
“Whether it’s three cells or a billion cells, we think, at the fundamental level, this is the building block,” Tsien said. “This equation gives you a way to wire the brain cells in such a way to turn seemingly infinite possibilities into organized knowledge.”
His equation, N = 2ⁱ - 1, defines how many cliques are needed for an FCM. N is the number of neural cliques connected in different possible ways; 2 means each clique is either on or off; i is the number of distinct pieces of information being handled; and the -1 is just part of the math, Tsien said; it removes the single combination in which no clique is active, leaving every informative possibility accounted for.
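To make the arithmetic concrete, here is a minimal illustrative sketch, not code from Tsien’s paper, that simply enumerates the combinations the equation counts, assuming each of the i inputs is treated as either present or absent:

```python
from itertools import combinations

def enumerate_cliques(inputs):
    """List every non-empty combination of the given inputs.

    In this illustration each combination corresponds to one neural
    clique, so i inputs yield 2**i - 1 cliques.
    """
    cliques = []
    for k in range(1, len(inputs) + 1):
        cliques.extend(combinations(inputs, k))
    return cliques

# Tsien's food-or-mate example: i = 2 inputs give 2**2 - 1 = 3 cliques,
# matching the food clique, the mate clique, and the combined
# "both are good things" clique.
print(enumerate_cliques(["food", "mate"]))
# [('food',), ('mate',), ('food', 'mate')]
```

With four inputs the same enumeration yields 2⁴ - 1 = 15 cliques, which is how the motif grows as the situation gets more complicated.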
The brain’s bottom-line mission, in Tsien’s view, is to use its relatively limited number of neurons to maximize its capacity to cope with essentially infinite possibilities. And there is ample evidence that normal neural networking is orderly. In fact, order appears to be a necessity in a human brain with 100 billion neurons, where each principal neuron can have 30,000 connections. Without that order, people can experience developmental delays and disabilities, develop conditions such as schizophrenia, or even die. Still, the prevalent view is that the starting connections are random and that learning brings some kind of order to them, Tsien said.
The fact that all goes amazingly well for most of us raises the question: How is the brain’s groundwork laid to enable organization and connectivity on such a massive scale?
“Everyone in the field agrees that the magic should be in how the brain cells are connected,” said Dr. Phillip Wang, MCG neuroscientist and Tsien’s colleague. The number of connections expands and evolves constantly, not only through evolution but also as we meet new people and learn new information.
Computers use the efficient binary coding scheme i = 2ⁿ, where n is the number of transistors and i is the amount of information they can represent. While some aficionados might disagree, Tsien said computers operate very differently from brains and can’t actually discover knowledge; they can only find, organize and take in what’s already out there.
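For comparison only, here is a small sketch that puts the two formulas side by side, using the variable names from the article (n transistors for the computer case, i informational inputs for the brain case); it is an illustration of the counting, not an implementation of either system:

```python
def computer_states(n_transistors: int) -> int:
    """Binary coding: n on/off transistors can represent 2**n distinct states."""
    return 2 ** n_transistors

def cliques_needed(i_inputs: int) -> int:
    """Theory of Connectivity framing: i distinct inputs call for 2**i - 1 cliques."""
    return 2 ** i_inputs - 1

for k in range(1, 5):
    print(k, computer_states(k), cliques_needed(k))
# 1 2 1
# 2 4 3
# 3 8 7
# 4 16 15
```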
“The brain needs to be able to see dynamic relationships and discover common patterns, not just differences between pictures of a Chihuahua and a German shepherd. Intelligence must lie in the brain’s wiring logic that turns random possibilities into some kind of organized pattern and generalized knowledge.”
He notes that Hebbian theory, detailed in the 1949 book “The Organization of Behavior” by the late Canadian psychologist Dr. Donald O. Hebb, basically says the neuron sending the original message fires first and repeatedly to properly stimulate the next neuron. Consequently, the connections between them, called synapses, strengthen. The theory is considered fundamental to how we learn, Tsien said. But how cells organize and connect to make that possible has remained elusive.
Tsien said his power-of-two-based mathematical explanation now needs rigorous testing.
The research was funded by the National Institutes of Health and the Georgia Research Alliance.