A self-driving car is traveling down a city street when, suddenly, a child chasing a ball darts toward the road.
The vehicle's sensors capture the situation immediately: cameras, lidar, and millimeter-wave radar work in parallel while a dedicated neural network processor and GPU run at full speed. At peak load the system consumes hundreds of watts of power and, through parallel computing, completes the path from perception to decision in about 100 milliseconds.
In the same situation, a human driver can hit the brakes in a split second. To process this complex scene, the brain consumes only about 20 watts of power, roughly that of a small light bulb. More remarkable still, through distributed parallel computation the brain simultaneously handles countless other tasks, such as regulating breathing and heartbeat. It computes in parallel at extremely low energy cost, learns quickly from limited experience, and adapts to all manner of unknown situations.
It is this enormous efficiency gap that has driven scientists to create a new research field: NeuroAI (neuro-inspired artificial intelligence). This emerging field aims to break through the limitations of traditional AI and build smarter, more efficient systems by imitating how the brain works. For example, we can borrow from the way the brain processes visual information to improve computer vision, draw on the connection patterns between neurons to optimize deep learning networks, and even adopt the brain's attention mechanisms to reduce the energy consumption of AI systems.
This article offers an in-depth look at NeuroAI, reviewing its definition, history, current research, and future trends. Drawing on the latest results shared at a recent NeuroAI symposium held by the US National Institutes of Health (NIH), it explores how NeuroAI models the brain's learning mechanisms, information-processing strategies, and energy efficiency, and how these ideas may resolve the bottlenecks AI currently faces.
What is NeuroAI?

NeuroAI embodies the two-way integration of artificial intelligence and neuroscience. On the one hand, it uses artificial neural networks as a new tool for studying the brain, employing computational models to test our understanding of the nervous system; on the other hand, it draws on the brain's working principles to improve artificial intelligence systems, turning the advantages of biological intelligence into technological innovation.
For many years, the basic goal of AI research has always been to build artificial systems that can complete all tasks that humans are good at. To this end, researchers continue to turn their attention to neuroscience research for inspiration. Neuroscience inspires advances in AI, and AI provides a testbed for neuroscience models—creating a positive feedback loop that accelerates progress in both fields.
The relationship between AI and neuroscience is mutualistic rather than parasitic: AI brings as much benefit to neuroscience as neuroscience brings to AI. For example, artificial neural networks lie at the heart of many state-of-the-art neuroscientific models of the visual cortex, and the success of these models on complex perceptual tasks has led to new hypotheses about how the brain might perform similar computations. Deep reinforcement learning, a neuro-inspired family of algorithms that combines deep neural networks with trial-and-error learning, is another compelling case of this mutual enhancement. Not only has it fueled breakthrough achievements in AI (including AlphaGo, which achieved superhuman performance in the game of Go), it has also inspired a deeper understanding of the brain's reward system.
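The trial-and-error core of this approach can be sketched in a few lines. Below is a toy tabular version of temporal-difference learning, the same update rule that, scaled up with deep networks, powers systems like AlphaGo; the 5-state chain environment and all constants are our own illustrative assumptions, not taken from any cited work.

```python
import numpy as np

# Toy illustration of the trial-and-error learning behind deep RL.
# A 5-state chain: the agent starts at state 0 and earns a reward
# of +1 only upon reaching state 4. (Hypothetical toy environment;
# real deep RL replaces the Q-table with a neural network.)
N_STATES, N_ACTIONS = 5, 2   # actions: 0 = left, 1 = right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # explore randomly with probability EPS (and whenever Q-values tie),
        # otherwise act greedily
        explore = rng.random() < EPS or Q[s, 0] == Q[s, 1]
        a = int(rng.integers(N_ACTIONS)) if explore else int(Q[s].argmax())
        s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # temporal-difference update, analogous to the reward-prediction-error
        # signals carried by dopamine neurons
        Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])
        s = s_next

# After training, "move right" dominates in every non-terminal state.
policy = Q[:-1].argmax(axis=1)
```

The temporal-difference error term `r + GAMMA * max(Q') - Q` is the quantity that has been compared with dopamine reward-prediction-error responses in the neuroscience literature.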
The development history of NeuroAI

The intertwining of computer science and neuroscience can be traced back to the birth of the modern computer. In 1945, John von Neumann devoted a chapter of his landmark EDVAC architecture report to the similarities between the system and the brain, and the report's only citation was a neuroscience paper (Warren McCulloch & Walter Pitts, 1943). That paper is widely considered the first work on neural networks, laying the foundation for decades of mutual inspiration between neuroscience and computer science.
▷Figure 1. Von Neumann's description of the computer's logical computing units, which drew on excitatory and inhibitory neurons. Source: von Neumann, J. (1993). First draft of a report on the EDVAC. IEEE Annals of the History of Computing, 15(4), 27-75.
The concept of neural networks achieved a major breakthrough in 1958, when Frank Rosenblatt published "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain," proposing for the first time the revolutionary idea that neural networks should learn from data rather than be programmed with fixed rules. The result was reported in the New York Times under the title "Self-learning of electronic 'brains'", setting off an early wave of artificial intelligence research. Although Marvin Minsky and Seymour Papert pointed out the limitations of single-layer perceptrons in 1969, triggering the first "neural network winter," the core idea that synapses are plastic elements, the free parameters of a neural network, has persisted to this day.
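Rosenblatt's core idea, adjusting synapse-like weights from data alone, can be illustrated in a few lines of Python. The AND task and learning rate below are our own toy choices, not details from the 1958 paper.

```python
import numpy as np

# Minimal sketch of the perceptron learning rule: weights (the model's
# "synapses") are free parameters adjusted from data rather than fixed
# by a program. Toy task (our assumption): the linearly separable AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])           # AND labels

w = np.zeros(2)
b = 0.0
lr = 0.1

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)   # threshold unit: fires or stays silent
        # Rosenblatt's rule: nudge weights only when the prediction is wrong
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

preds = [int(w @ xi + b > 0) for xi in X]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop finds a correct separating line; Minsky and Papert's critique was precisely that no such line exists for problems like XOR.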
▷Figure 2. The perceptron, depicting two different types of neurons, excitatory and inhibitory. Source: Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386-408.
Recent AI development offers many cases of neuroscience-inspired design. The most representative is the convolutional neural network, highly successful in image recognition, which was inspired by David Hubel and Torsten Wiesel's studies of the brain's visual cortex decades earlier. Another typical case is dropout, which randomly switches off individual units in an artificial network during training to prevent overfitting. By mimicking the unreliable, stochastic firing of biological neurons, it gives artificial neural networks stronger robustness and generalization.
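Dropout itself is simple enough to sketch directly. The snippet below implements the standard "inverted" variant applied at training time; the layer size and drop probability are arbitrary choices for illustration.

```python
import numpy as np

# Sketch of "inverted" dropout as used at training time: each hidden
# unit is silenced with probability p, loosely mimicking the unreliable
# firing of biological neurons.
def dropout(activations, p, rng):
    """Zero each unit with probability p; rescale survivors by 1/(1-p)
    so the expected activation is unchanged (inverted dropout)."""
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

rng = np.random.default_rng(42)
h = np.ones(1000)                  # a hidden layer of all-ones activations
h_dropped = dropout(h, p=0.5, rng=rng)
```

At inference time nothing extra is needed: because survivors were rescaled during training, the layer's expected output already matches the full network.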
Three revelations from the brain for NeuroAI

Over the past decade, AI has made significant progress in many fields: it can write articles, pass bar exams, prove mathematical theorems, write complex programs, and recognize speech. Yet in areas such as navigating the real physical world, planning across time scales, and reasoning about perception, AI's performance is mediocre at best.
As Richard Feynman said, "The imagination of nature far exceeds that of human beings." The brain, the only computational model known to master these complex tasks, has been honed by 500 million years of evolution, which is why animals can easily accomplish feats, such as hunting, that remain difficult for current AI.
These are the capabilities NeuroAI hopes to learn from, breaking through the bottlenecks of current AI systems by studying how the brain works. Concretely, this is reflected in the following aspects:
(1) Genomic bottleneck
Unlike AI systems, which must be trained from scratch on massive data, biological intelligence inherits evolutionary solutions through the "genomic bottleneck," enabling animals to perform complex tasks on instinct. The genome plays a key role in this process: it provides the basic blueprint for building the nervous system, specifying the pattern and strength of connections between neurons and laying the foundation for an organism's lifelong learning.
▷Figure 3. Species that use innate plus acquired learning strategies will have an evolutionary advantage if they perform better than species that rely solely on innate instincts. Source: Zador, A. M. (2019). A critique of pure learning and what artificial neural networks can learn from animal brains. Nature Communications, 10, Article 3770.
It is worth noting that the genome does not directly encode specific behaviors or representations, nor does it directly encode optimization principles. It primarily encodes connection rules and patterns, which must then be refined by learning to produce actual behaviors and representations. Evolution operates on these connection rules, which suggests that when designing AI systems we should pay more attention to a network's connection topology and overall architecture.
This insight has important implications for AI design. We can imitate the connection patterns of biological solutions, for example using similar wiring in the visual and auditory cortex to design cross-modal AI systems. And by compressing the weight matrix through a "genomic bottleneck," we can extract a neural network's most essential connection features, using this information bottleneck to learn more efficiently and reduce dependence on training data.
▷Figure 4. The architecture and performance of artificial neural networks designed on the genomic bottleneck principle in reinforcement learning tasks. Source: Shuvaev, S., Lachi, D., Koulakov, A., & Zador, A. (2024). Encoding innate ability through a genomic bottleneck. Proceedings of the National Academy of Sciences, 121(38), Article e2409160121.
(2) The human brain's energy-saving strategies
There is a huge gap between artificial neural networks and biological brains in energy consumption. Holding a real-time conversation with a model such as ChatGPT currently takes at least 100 times the energy the human brain would use. And comparing a GPU array's consumption against that of the whole brain significantly underestimates the brain's advantage: maintaining a conversation occupies only a small fraction of the brain's energy budget.
The brain's extraordinary energy efficiency may come down to two key factors: the "sparse" way neurons operate and the brain's high tolerance for noise.
First, most of the energy neurons consume goes into generating action potentials, and consumption is roughly proportional to the firing rate. In the cerebral cortex, neurons operate sparsely, producing on average only about 0.1 spikes per second. Current artificial networks, by contrast, run at high activation rates and high power. Although their energy efficiency has improved, we are still far from mastering the brain's sparse, spike-based model of energy-efficient computing.
Second, the brain tolerates noise. During synaptic transmission, even when up to 90% of impulses fail to trigger neurotransmitter release, the brain still functions normally. This stands in stark contrast to modern computers, whose numerical calculations rely on precise 0s and 1s: a single bit error can cause catastrophic failure, so a great deal of energy is spent guaranteeing signal accuracy. Brain-inspired algorithms that can operate in noisy environments could therefore yield significant energy savings.
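A toy calculation shows why 90% synaptic failure need not be fatal: a downstream unit that pools many unreliable inputs can still read out the signal reliably. The numbers below are illustrative, not a biophysical model.

```python
import numpy as np

# Toy illustration of noise tolerance: a downstream "neuron" pools
# input from many unreliable synapses, each of which transmits with
# only 10% probability (i.e., 90% transmission failure).
rng = np.random.default_rng(1)

n_synapses = 10_000
p_release = 0.1          # 90% of impulses fail to release transmitter
signal = 1.0             # presynaptic drive per synapse

# On a given trial, each synapse independently succeeds or fails
successes = rng.random(n_synapses) < p_release
# Pooling and rescaling by the expected success rate recovers the signal
estimate = signal * successes.sum() / (n_synapses * p_release)
```

The relative error of the pooled estimate shrinks roughly as one over the square root of the number of synapses, which is why massive redundancy lets biology trade per-component reliability for energy.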
(3) Biological systems balance multiple goals
Goal management also differs significantly between biological systems and artificial intelligence. Current AI systems typically pursue a single objective, while organisms must balance multiple goals on both immediate and long-term scales, spanning survival, predation, courtship, and reproduction (the so-called four Fs). We still know very little about how animals balance multiple goals, largely because we do not yet fully understand the brain's computational mechanisms. As we work toward AI systems that can handle multiple goals, neuroscience can guide AI design, and AI models can serve as testbeds for theories of multi-goal management in the brain, a positive interaction that accelerates both fields.
Cutting-edge progress in NeuroAI research

The cross-fertilization of neuroscience and artificial intelligence continues to deepen, with researchers drawing inspiration from biological intelligence at every level, from microscopic molecules to macroscopic systems. The following summarizes a series of NeuroAI results shared at a symposium hosted by the NIH in November 2024, which deepen our understanding of the nature of intelligence.
1. Biocomputing of astrocytes
Biological neural networks can adapt quickly to their environment and learn from limited data. Astrocytes, as carriers of analog information, play a key role in the slow integration and processing of information in these networks. This discovery provides a new perspective on the distinctive properties of biological neural networks, with important implications for both neuroscience and artificial intelligence.
2. Closing the loop between neuroscience and virtual neuroscience
Significant advances in neurotechnology now let researchers record neuronal activity under natural conditions with unprecedented coverage and biophysical precision. By developing digital twins and foundation models, researchers can run virtual experiments, generate hypotheses, simulate neural activity, and explore brain function in ways that transcend the limits of traditional experimental methods. This shift toward virtual neuroscience is critical for accelerating progress in the field, and it offers insights for developing flexible, safe, and humane AI systems.
3. Advanced NeuroAI systems with dendrites
Dendrites, the receiving end of neurons, play a key role in biological intelligence. Integrating dendritic features into AI systems could improve energy efficiency, enhance noise resistance, and help solve problems such as catastrophic forgetting. However, the core functional properties of dendrites, and how to exploit them in AI, remain unclear, which limits the development of brain-like AI systems. Addressing this will require interdisciplinary study of the anatomical and biophysical properties of neuronal dendrites across species. With the help of computational models and new mathematical tools, researchers can better understand dendritic function, advancing both the development of dendrite-based AI systems and a deeper understanding of biological design principles and their evolutionary significance.
4. The future of NeuroAI draws inspiration from insects and mathematics
Although current AI systems are powerful, they depend on huge networks, massive datasets, and enormous energy budgets. Compared with natural intelligence, they lack key biological mechanisms such as neuromodulation, neural inhibition, circadian rhythms, and dendritic computation, and how to apply these mechanisms to improve AI performance has become an important topic. Recently completed studies of the Drosophila connectome point to a new research direction: drawing inspiration from simple but efficient biological brains. The effort faces significant mathematical challenges, however, especially in handling high-dimensional nonlinear dynamical systems such as neural networks.
5. Learning from Neural Manifolds: From Biological Efficiency to Engineering Intelligence
Recent breakthroughs in experimental neuroscience and machine learning have revealed striking similarities in how biological systems and AI process information across scales, creating an opportunity for the deep integration of the two fields over the next decade. The proposal is that the geometric principles of neural representation and computation could transform how we design AI systems while deepening our understanding of biological intelligence. Achieving this will require progress in several key areas: 1) new techniques to capture the dynamics and transformations of neural manifolds at different time scales during behavior; 2) theoretical frameworks linking single neurons to population computation, to reveal principles of efficient information processing; 3) cross-modal representation theories and supporting mechanisms that explain the robustness of neural manifolds and their transformations; and 4) computational tools for analyzing large-scale neural data that borrow the efficient mechanisms of biological nervous systems. Through interdisciplinary work spanning statistical physics, machine learning, and geometry, we can expect AI systems that come closer to biological intelligence in efficiency, robustness, and adaptability.
6. Moving towards insect-level intelligent robots
With modern advances in control theory and AI, robots can now perform tasks that almost any person can, from climbing ladders to folding laundry. Looking ahead, how can we design systems that make autonomous decisions and perform diverse tasks, using only on-board computing, without relying on cloud services? Drosophila offers an ideal research template: we now have the complete connectome of its brain and ventral nerve cord, and these tiny creatures autonomously switch among multiple behaviors depending on internal and external state. Applying this model to robotic systems, however, will require higher-resolution connectomics data extended to more individual samples; connectomes of insects with more complex behaviors, such as mantises, to clarify the relationship between neural structure and intelligence; a deeper look at dendritic and axonal structure beyond simple point-to-point connection models; and easily accessible neuromorphic hardware that can simulate millions of neurons.
7. Mixed-signal neuromorphic systems for next-generation brain-computer interfaces
Traditional AI algorithms, although effective for analyzing large-scale datasets, have limitations when applied to the real-time processing of sensory data in closed-loop systems, especially in neurotechnology, which requires real-time interaction with the nervous system. The limitations fall into two areas: system requirements and energy consumption. At the system level, low-latency local processing is needed to ensure data privacy and security; on the energy side, wearable and implantable devices must run continuously, so power must be held to the sub-milliwatt level. To meet these challenges, research is shifting toward bottom-up physical approaches such as analog neuromorphic circuits and mixed-signal processing systems. These neuromorphic systems use passive sub-threshold analog circuits and data-driven coding to perform complex biomedical signal classification, such as epilepsy detection, at microwatt-level power.
Postscript

As the field of NeuroAI continues to develop, we stand at a unique historical juncture: the deep integration of neuroscience and artificial intelligence not only helps us better understand the nature of human intelligence, but also offers new ideas for designing the next generation of AI systems. From genomic bottlenecks to dendritic computation, from energy efficiency to multi-objective balance, the properties of biological intelligence are changing our understanding of intelligence itself. As brain science and AI technology advance, NeuroAI may bring a revolution in computing paradigms and help us build artificial systems closer to biological intelligence. This is not only about technological innovation but also about deep reflection on the nature of intelligence: in this process, we are not just creating new technologies, we are coming to understand ourselves anew.