
AI Study丨 Progress in brain-like research

Author: Chinese Society of Artificial Intelligence

Text / Huang Tiejun

This report covers four topics: first, why brain-like; second, how to be brain-like; third, progress in life simulation; and fourth, progress in brain-like vision.

1. Why brain-like

Brain-like research is one of the most important directions in artificial intelligence; AI research cannot be complete without it. Today's deep learning and reinforcement learning grew out of the traditional schools of symbolism, connectionism, and behaviorism, especially connectionism: deep learning uses deep neural networks, and the training carrier behind reinforcement learning's successes is also neural networks. The core position and fundamental role of neural networks in artificial intelligence are therefore beyond doubt.

Brain-like research is, in fact, the original intention of connectionism: the goal of connectionism is, in the final analysis, to decide what kind of neural network to build, and brain-like means building an artificial neural network as finely structured as a biological brain. In recent years many people in China have treated brain-like research as a new direction, but it is actually the "main theme" of AI development.

Throughout AI's development, however, the neural networks used in the field have been much simpler than biological neural networks. From the MP neuron model of 1943 (still in use today) to the deep neural networks recognized by the 2018 Turing Award, many years of iteration have produced good results; deep learning and reinforcement learning have achieved major breakthroughs. Deep learning has made major advances in face recognition and similar tasks, yet it still has fundamental flaws in performing them. For example, artificial intelligence can now distinguish hundreds of millions of faces, but add an adversarial perturbation and it will confuse two people, or fail to detect a person at all; these are low-level mistakes a human would never make. Each such problem can be patched and improved, but the defects cannot be exhausted: new ones keep being found.
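The fragility described above is easy to reproduce in miniature. The following sketch is my own illustration, not the speaker's system: a toy linear classifier whose label is the sign of a dot product, attacked with an FGSM-style step. Every pixel changes only slightly, yet the decision flips.

```python
import numpy as np

# Toy illustration (not the speaker's system): a linear "recognizer" whose
# label is the sign of w . x. In high dimensions, a perturbation that is
# small per pixel but aligned against the gradient flips the decision,
# the same effect as FGSM-style adversarial attacks on face recognizers.
rng = np.random.default_rng(0)
n = 10_000
w = rng.normal(size=n)        # hypothetical learned weights
x = rng.normal(size=n)        # an ordinary input "image"

def classify(v):
    return "A" if w @ v > 0 else "B"

eps = 0.1                     # small relative to the pixel scale (~1.0)
# Step each pixel slightly against the current decision's gradient:
x_adv = x - np.sign(w @ x) * eps * np.sign(w)

print(classify(x), "->", classify(x_adv))    # the label flips
```

The point is that many tiny per-pixel changes, all pushing in the gradient direction, accumulate into a large change in the decision score.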

Why does this happen? There are many patches and explanations, but in the final analysis there remain huge differences between today's artificial neural networks and biological neural networks. Although trained artificial networks may have hundreds or thousands of layers and hundreds of millions of connections, their structural complexity, neuronal complexity, network architecture, and mechanistic complexity are still insignificant compared with biological neural networks. Consider the complexity of the visual cortex shown here: leaving aside the complexity of individual neurons, the sheer number of neurons and their intricate structural relationships are far beyond anything in today's artificial neural networks. Artificial neural networks are far simpler than biological systems, so it is no surprise that there are huge differences in functionality.

Even before the term "artificial intelligence" appeared, there were clear opinions about what kind of neural network to build and to what extent; in retrospect, those visions are far from realized. Take von Neumann and Turing, pioneers of computing. Von Neumann said in 1948 that the connection pattern of the brain's visual system may itself be the simplest logical expression or definition of the principle of vision. That is, if you want to build a machine vision system comparable in function to the brain's, you need a structural model, and the simplest such model is the brain itself: the simplest complete model of the brain is the brain, and it cannot be simplified formally with logic, algorithms, and the like. Of course, simulating the brain need not be done with proteins and organic matter; other materials and devices can be used, but the model should be as complex and complete as the brain.

In "Computing Machinery and Intelligence", his first article in the field of artificial intelligence, published in 1950, Turing wrote that a truly intelligent machine must have the ability to learn, and that the way to make such a machine is first to build one that simulates a child's brain and then to educate and train it. Artificial intelligence today does not work this way: everyone starts from a basic multi-layer network rather than from a complex, high-fidelity brain-like neural network. So it is no exaggeration to say that, compared with the brain, artificial intelligence is still in its infancy; the gap is simply too large.

2. How to be brain-like

The brain is certainly an important reference object, but how do we make something brain-like? This question is controversial, and different people hold different views.

Before 2013, my view of artificial intelligence and brain-like research was similar to most people's: to build a brain-like intelligent system, we must first solve the scientific problem of intelligence and figure out what intelligence is. Around 2014 my thinking changed radically, because I found this view has limitations. At the end of the day, is science always the basis of technology? Must a scientific problem be solved before the corresponding technical problem can be?

Example 1: The compass was invented in China in the 11th century or even earlier, while electromagnetism was established in the 19th century. Explaining why a compass works naturally requires electromagnetism and geoscience, yet neither existed when the compass was invented. The compass was invented without its scientific principles.

Example 2: The airplane was invented in 1903, while aerodynamics was systematized between 1939 and 1946 and is still being perfected. Clearly the airplane did not fly under the guidance of aerodynamics: the Wright brothers built a device, kept improving it, and took to the sky.

Example 3: Deep learning has been developing for more than 10 years; by 2012 it was already hot. Later, explainability grew increasingly prominent, and many people say interpretability matters most, but the interpretability of deep learning is still being explored. Should we have waited for an interpretable theory before inventing deep learning?

History is like this: deep learning works very well as a network system, and training it with big data yields great practical value. It is a technological invention, and it did not depend on our first proposing the information-processing principles and mathematical models behind neural networks. Explaining those principles is very important, a major scientific problem, but it came after the invention of deep learning. In short, it is not a matter of first producing a scientific theory and then instructing technologists to invent a neural network.

The major turning point in my thinking was the realization that we should go ahead and design AI systems rather than wait for the principles to be revealed.

Von Kármán, the theorist of aerodynamics, said: "Scientists discover the world that exists; engineers create the world of the future." Artificial intelligence is first and foremost a technology, creating ever-smarter systems. Scientists seek the principles behind things, but we cannot rigidly assume that scientific discovery is a prerequisite for technological invention. The examples above are of technological inventions followed by scientific discoveries; of course there are also scientific discoveries followed by technological inventions. The two are mutually reinforcing, intertwined processes. Before 2013 my way of thinking had been shaped by my education, as much education still shapes people today: into believing that science is settled first and technology follows. All projects, including NSFC grants, require applicants to state their scientific questions; if we asked the Wright brothers or the inventors of the compass to write down their scientific questions, what would they write? This one-way mode of thinking is deeply problematic.

In my view, brain science (including neuroscience and cognitive science) explores the mechanisms of biological intelligence and belongs to natural science. Intelligence science studies the laws behind intelligent phenomena; in the broad sense it includes brain science, while in the narrow sense it is the technical science that takes machine intelligence as its object. The brain is intelligent, so there are laws behind it; what, then, are the laws behind the intelligence of artificial systems? The interpretability of deep learning, for example, is intelligence science, and classified by Qian Xuesen's definition it is technical science. Technical science is still science, and brain science is science. Nature contains the universe, brains, and all kinds of complex systems; some consider the brain the most complex system and brain intelligence the most complex phenomenon, so it is reasonable to call brain science "the last frontier of natural science". But intelligence science is not only the study of the brain; it also studies the laws behind artificial intelligence. Can people build systems more complex than, or simply different from, brain intelligence? We cannot claim that the brain is unique, that no intelligent system can exist besides it, or that none can surpass it. In this sense, intelligence science opens an infinite frontier for scientific exploration.

From this point of view, artificial intelligence is clearly a technology. It constructs many complex systems; these systems serve practical purposes and at the same time provide more and more new objects for scientific study. Seen this way, intelligence science is a consequence of technological progress rather than its premise. We should therefore regard intelligence science and artificial intelligence as mutually influencing processes, rather than placing theory on high to guide practice, which serves neither side well.

Having realized this, I wrote several articles, one titled "Can Humans Create a 'Super Brain'?". Computers are already very strong at computation, and if we want a more intelligent system, we must build a more complex machine: a "super brain". How, and, given the current state of technology, when? The goal is a physical basis and carrier that approaches the brain at the level of neural networks and surpasses it in some respects. At the time I estimated it would take about 30 years, that is, by around 2045 humans may build such a machine, which happens to be roughly the 100th anniversary of the computer (the first computer was built in 1946): by around 2045 or 2046, the physical basis of a new generation of AI machines may be equivalent, as a carrier, to the brain. In short, what we need to learn from the brain first is not its principles; that is a long-term matter. What can be done now is to draw on neuroscience's analysis of the brain: neuroscience has provided many basic network architectures and signal-processing models, so can we reproduce the structure and signal processing of the biological brain in electronic form? This is what I call a "brain-like computer" or "electronic brain". Von Neumann and Turing both spoke of this; the question here is only when modern technology will make it possible.

Once such a machine is built, training it will certainly produce functions, though how complex they will be is hard to say. Because these functions appear on a machine, a man-made device, understanding the principles behind them is relatively convenient, much simpler than doing brain experiments. If we can discover mechanisms of intelligence by exploring on this platform, it may help us understand the human brain and accelerate the unveiling of its mysteries; and as understanding of the brain progresses, the two-way cycle can continue. The brain-like road, therefore, should not fixate on the principles of the brain; it should start from relatively well-characterized neural network structures and iterate on construction, letting science and technology promote each other and accelerate the process.

Many readers know the book Possible Minds: 25 Ways of Looking at AI, which has a chapter giving three laws of artificial intelligence. The first, Ashby's law, holds that any effective control system must be as complex as the system it controls: a simple system cannot control a complex one. The second, from von Neumann, is that the simplest complete model of an organism is the organism itself; no simpler model achieves the same function. The third is that any system simple enough to understand is not complex enough to behave intelligently, while any system complex enough to behave intelligently is too complex to understand. Can we really wait for the day the brain is fully understood? If the brain is a truly intelligent system, we may never fully understand it; unveiling its mysteries may be an endless process of approximation. But even without understanding, construction is entirely possible. This is my main point: intelligence can be built without first being understood, and that is what the field of artificial intelligence should do.

3. Progress in life simulation

Guided by this methodology, over the past six or seven years our team has gone back to the biological nervous system, developing the hardware and software needed to construct the kind of system we hope can be completed by around 2045.

The brain is a nervous system, a neural network, and we expect to use the neural network of the human brain at birth as the starting point for training artificial intelligence. The premise is that this neural network must first be parsed before it can be reconstructed and simulated.

As mentioned, the complexity of brain neural networks is high, and there are at least two classical works here. The first is the Hodgkin-Huxley equations, formulated in 1952 and recognized with the Nobel Prize in 1963; they are the mathematical-physical model of neuronal signal processing accepted by theoretical neuroscience. The second is cable theory, from 1959. Biological neurons have complex structures: signals from many sources travel from the dendrites to the cell body and are then sent out through the axon, and every signal transformation along the way has its own model. This "fine neuron model" is very different from the "point model" of today's artificial neural networks, and realizing it is correspondingly demanding.

The Hodgkin-Huxley equations are a set of differential equations. Fine modeling divides the complex neuronal structure into many compartments, each described by its own set of equations, so the computational cost is many orders of magnitude higher than that of the "N inputs, one output" point-neuron model used today, which is why few people do this work. But if we really want to be brain-like, how can we achieve the brain's function without this level of detail? We must return to the fine neural network structure and reproduce neuronal signal processing through ion-channel simulation.
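To make the cost concrete, here is a minimal single-compartment Hodgkin-Huxley simulation, my own sketch using the classic squid-axon parameters; a fine model couples thousands of such compartments through cable theory, so this is the smallest building block, not the speaker's simulator.

```python
import numpy as np

# Minimal single-compartment Hodgkin-Huxley neuron, forward-Euler
# integration with the classic squid-axon parameters (mV, ms, uF/cm^2,
# mS/cm^2). A fine multi-compartment model couples thousands of such
# compartments through cable theory; this sketch shows one.
C_m = 1.0
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

dt, T = 0.01, 50.0                        # ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32       # resting state
spikes, above = 0, False
for i in range(int(T / dt)):
    I_ext = 10.0 if i * dt > 5.0 else 0.0         # step current, uA/cm^2
    I_ion = (g_Na * m**3 * h * (V - E_Na)
             + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    V += dt * (I_ext - I_ion) / C_m               # membrane equation
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)     # gating kinetics
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    if V > 0 and not above:                       # upward 0 mV crossing
        spikes += 1
    above = V > 0

print("spikes in 50 ms:", spikes)
```

Even this one compartment needs four coupled state variables and six rate functions per time step, which is why a network of millions of compartments is so expensive.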

Many people work in this area, and there is much representative work. The LIF model used in spiking neural networks is the simplest spiking neuron model, but it is still a point neuron. From there up to fine models such as the Hodgkin-Huxley equations, many researchers have produced simplifications and optimizations in between. In short, the goal is to implement models such as the Hodgkin-Huxley equations accurately.
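For contrast with the Hodgkin-Huxley model, the LIF point neuron mentioned above reduces the whole cell to a single leaky voltage variable. A minimal sketch with illustrative parameters of my own choosing:

```python
# Leaky integrate-and-fire (LIF) "point neuron": the whole cell is one
# leaky voltage variable, with no dendritic structure or ion channels,
# the far-simple end of the spectrum that runs up to Hodgkin-Huxley
# fine models. Parameters below are typical illustrative values.
tau_m, V_rest, V_th, V_reset = 10.0, -65.0, -50.0, -65.0   # ms, mV
R = 10.0                                                    # megaohm

def lif_spike_count(I, dt=0.1, T=200.0):
    """Spikes fired in T ms under a constant input current I (nA)."""
    V, spikes = V_rest, 0
    for _ in range(int(T / dt)):
        V += dt * (-(V - V_rest) + R * I) / tau_m   # leaky integration
        if V >= V_th:                               # threshold: spike
            V, spikes = V_reset, spikes + 1
    return spikes

# Below rheobase (R*I < V_th - V_rest = 15 mV) the neuron stays silent:
print(lif_spike_count(1.0), lif_spike_count(2.0))
```

One equation versus the Hodgkin-Huxley system's four: this gap in state and dynamics is exactly the simplification-versus-fidelity trade-off the text describes.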

As this suggests, neuromorphic computing and spiking neural network chips are still far from fine simulation. To address this, we chose the software simulation route: software can follow the mathematics closely and thus come closer to the biological process, at the cost of more computing resources. The best-known work here is the algorithm proposed by Yale University's Hines in 1994 and the NEURON simulation platform, developed specifically for fine neural network modeling; today more than 80% of fine brain-modeling systems are based on this platform.

Around 2015, using the NEURON simulation platform, we found its computational efficiency too low for large-scale neural networks, so the first task was to improve efficiency at the same accuracy, which is what we in computer science are good at. One of my students has worked on this for 7 years. After surveying and comparing the algorithm-acceleration literature, he found that efficiency had improved very little over the past decade, so he devoted his Ph.D. to optimization algorithms and has now made a big breakthrough.

In general, the first task is to uncover the parallelism in fine neuron computation. A neuron is a tree-like structure, so parallelism is natural; the key is how close to optimal the parallelization can be. That was the first problem we set out to solve: proposing an optimal parallel algorithm. The second task was to fully exploit GPU features such as caches and shared memory, implementing the optimized algorithm on the GPU.
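A sketch of where the parallelism question comes from: implicit time-stepping of the cable equation couples each compartment to its neighbours, which for an unbranched cable yields one tridiagonal linear system per step, solvable in O(n) by the Thomas algorithm. Hines' method generalizes the same elimination to branched trees, whose independent branches can be eliminated in parallel. The coefficients below are illustrative only, not physiological.

```python
import numpy as np

# Implicit time-stepping of the cable equation couples each compartment to
# its neighbours. For an unbranched cable this gives one tridiagonal system
# A v = d per time step, which the Thomas algorithm solves in O(n).
# Hines' ordering generalizes this elimination to branched trees, and
# independent branches can be eliminated in parallel, e.g. on a GPU.
def thomas(a, b, c, d):
    """Solve a tridiagonal system; a/b/c are sub-/main/super-diagonals."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    v = np.empty(n)
    v[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        v[i] = dp[i] - cp[i] * v[i + 1]
    return v

# A 5-compartment toy cable with illustrative (not physiological) numbers:
n = 5
sub = np.full(n, -1.0); sub[0] = 0.0         # coupling to previous compartment
sup = np.full(n, -1.0); sup[-1] = 0.0        # coupling to next compartment
main = np.full(n, 3.0)                       # leak + capacitance terms
rhs = np.ones(n)
v = thomas(sub, main, sup, rhs)

A = np.diag(main) + np.diag(sub[1:], -1) + np.diag(sup[:-1], 1)
print(np.allclose(A @ v, rhs))               # True: the solve is exact
```

The forward-elimination loop above is inherently sequential along one branch, which is precisely why exposing parallelism across branches and neurons, and mapping it onto GPU caches and shared memory, is the interesting optimization problem.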

With these two pieces of work in place, we can run large-scale fine neural network simulations. Using the network-construction method proposed by Hjorth et al. of the Karolinska Institute in 2020, we finely modeled the mouse striatum. For comparison: each GPU node of Peking University's "Shengke No. 1" supercomputing system used in the experiment contains four NVIDIA Tesla V100 GPUs, while each CPU node contains two 16-core Xeon Gold 6142 CPUs. On this platform we modeled a large-scale striatum network of 10,000 fine neurons with 142,007,820 compartments in total. The results: parallel simulation with NEURON takes 1,842 s on 30 CPU nodes and 2,243 s on 25 CPU nodes, while our implementation takes 2,168 s on a single GPU node. In other words, one GPU node is equivalent to 25 to 30 CPU nodes, and algorithmic efficiency improved by more than 10 times.

Another line of work is the high-precision life simulation system that KLCII has been building since 2018. The platform is now basically complete, though not yet a mature software suite; much of its software was developed in-house, and some is standard in the field. The whole pipeline is in place: collected data from anatomy, electrophysiology, bioinformatics, and so on can be imported and processed so that neuron structure can be reconstructed. Since every neuron must be simulated, each neuron's electrophysiology is tuned according to the H-H equations: a neuron model is instantiated, the electrophysiological behavior of the living cell is approximated, and many such neurons are assembled into a fine neural network and run. Some models have been built on the platform, mainly to demonstrate its capabilities; models of real value to life science should be built in collaboration with life scientists.

We are now working on a simple nematode model and a primate vision model. The first phase of the nematode simulation was presented at KLCII in 2022. C. elegans is the simplest model animal, with only 302 neurons, yet simulating even so simple a nervous system demands enormous work and involves great complexity. The connectome of the nematode's 302 neurons was mapped as early as 1986, and three Nobel Prizes have been related to the nematode, but no high-precision simulated nematode had been constructed until now.

Our work is, first, to simulate the nematode's nervous system at the highest precision possible and, second, to have a "live" nematode: we must construct a three-dimensional, fluid, real-time, dynamic physical simulation environment in which it can move and be trained, so that its intelligence can be expressed and fully closed-loop intelligent training can be realized.

The data used to simulate the nematode come from electrophysiological measurements accumulated by the life sciences over past years, turned into computational models. This work is only a first stage: all 302 neurons are present and all connections are complete. "High precision" means that the 106 neurons of the olfactory and motor circuits are finely modeled, backed by electrophysiological data; we used self-developed tools to fit these neurons so that their signal-processing dynamics approach the measured electrophysiology. The most complex neuron has 2,313 compartments, the simplest has 10, and 14 classes of ion channels are supported.

Physical simulation of the nematode's body and of its environment is the other aspect of the work. With both in place, a nematode can be put into an environment and trained: all of its perception and control are governed by its own nervous system, and its body drives 96 muscles and more than 3,000 motor units. This accuracy far surpasses the OpenWorm system in many respects.

Building the nematode is just the first step, but I believe work in this direction will keep accelerating worldwide, and that by around 2035 a great deal of the human brain's data will have been parsed.

4. Progress in brain-like vision

Turning to our work on the visual system: of the roughly 60 kinds of cells in the retina, we have modeled more than 20, including the various neuron structures and neural connections, and we hope to completely model the primate retina within 5 years.

We must not only pursue basic scientific exploration but also applications. So we implemented the basic principles of the primate retina in a chip and built a camera that serves as the signal input device for spiking neural networks. The camera simulates one part of the biological mechanism, pulse-based perception, with a huge gain in performance: a biological system runs at roughly 10 Hz, while the electronic chip reaches 40,000 Hz (400,000 Hz is also feasible). Such a chip can see the shock waves generated in an ultra-high-speed wind tunnel.
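The pulse-perception principle can be sketched as an integrate-and-fire pixel. This is my own illustration with made-up parameters, not the actual chip design: the pixel accumulates incoming light and emits a pulse each time the accumulation crosses a threshold, so pulse rate encodes brightness.

```python
# Sketch of the pulse-perception principle as an integrate-and-fire pixel
# (illustrative parameters only, not the actual chip design): the pixel
# accumulates incoming light and emits a pulse each time the accumulation
# crosses a threshold, so pulse rate encodes brightness.
def pixel_pulses(intensity, theta=1.0, dt=1 / 40_000, T=0.01):
    """Count pulses emitted over T seconds at sampling interval dt."""
    acc, pulses = 0.0, 0
    for _ in range(int(T / dt)):
        acc += intensity * dt     # light accumulation this interval
        if acc >= theta:          # threshold crossed: emit pulse, reset
            pulses += 1
            acc -= theta
    return pulses

bright = pixel_pulses(2000.0)     # bright pixel: high pulse rate
dim = pixel_pulses(200.0)         # dim pixel: low pulse rate
print(bright, dim)
```

Because each pixel reports asynchronously as soon as its threshold is crossed, a fast-sampling implementation of this scheme can capture transient events, such as wind-tunnel shock waves, that a frame-based sensor would blur or miss.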

5. Concluding remarks

1. Artificial intelligence is a technology with the goal of creating more and more intelligent systems, and the brain is the best blueprint.

2. The brain-like path is to construct an intelligent system first, a bottom-up methodology, and only then to understand the principles of intelligence; understanding the principles should follow the construction of intelligent systems, not precede it. The brain-like road described here simulates neural network structure and signal-processing mechanisms, from simple neural networks such as the nematode's and the retina, up to the complex human brain.

3. Although C. elegans has only 302 neurons, fine simulation is difficult; we have taken only one step forward, and much work remains before a realistic approximation of C. elegans is achieved. That does not mean life simulation is a matter for the distant future: like the human genome, I believe it is hardest at the start and will get faster and faster as technical means progress. Predictions are hard, but I think simulating the human brain may be possible in about 20 years, and advances in neuroscience may come along with this elucidation of the brain.

4. In brain-inspired vision, there has been a breakthrough in functional simulation of the retina; fine structural simulation is about halfway there and may take another five years. Brain-inspired artificial intelligence is a process of "laying eggs along the way", delivering results at every stage: structural brain-mimicry, functional brain-likeness, and performance that greatly surpasses the brain, by many orders of magnitude over biological systems, with huge application value as well.

(References omitted)


Excerpt from "Newsletter of Chinese Society of Artificial Intelligence"

Vol. 12, No. 7, 2022

Transcript of the speech
