China research ‘bridges gap’ between power hungry AI models and the human brain

Current AI trends largely revolve around building ever-bigger neural networks, an approach that is fuelling concerns about unsustainable energy demands and a lack of interpretability.

In contrast, the human brain – with its 100 billion neurons and around 100 trillion synaptic connections – consumes about 20 watts of power. At the same time, each of the brain’s neurons is more diverse and complex than the artificial neurons in any existing AI model.

Researchers Li Guoqi and Xu Bo from the Chinese Academy of Sciences’ Institute of Automation, along with Peking University’s Tian Yonghong, noted that the two fields – AI and neuroscience – shared a symbiotic relationship.

But while the brain’s neurons produce and transfer complex signals that can change over time, the artificial neuron at the heart of silicon chip-based AI models – which the researchers described as a “coarse abstraction” of a biological neuron – can only ever output noughts and ones.

The researchers used a mathematical model first described in 1952 by physiologists Alan Hodgkin and Andrew Huxley to build a neural network that replicated the capabilities of a larger, simpler model within a smaller, internally complex structure.
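For context, the heart of the Hodgkin-Huxley model is a set of coupled differential equations for the voltage across a neuron’s membrane. In its standard textbook form (general background, not an excerpt from the paper), the membrane equation reads

C_m \frac{dV}{dt} = I_{\text{ext}} - \bar{g}_{\text{Na}} m^3 h (V - E_{\text{Na}}) - \bar{g}_{\text{K}} n^4 (V - E_{\text{K}}) - g_L (V - E_L),

where each gating variable x \in \{m, h, n\} follows its own voltage-dependent equation \frac{dx}{dt} = \alpha_x(V)(1 - x) - \beta_x(V) x. These coupled, nonlinear dynamics are what make a single Hodgkin-Huxley neuron “internally complex”.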

According to the paper, published in Nature Computational Science, the network – consisting of four leaky integrate-and-fire neurons – reproduced the behaviour of a single Hodgkin-Huxley neuron, a result the team established through theoretical proofs and simulations.
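A leaky integrate-and-fire neuron, by contrast, is one of the simplest spiking models. The Python sketch below is an illustrative textbook implementation – not the research team’s code, and the parameter values are arbitrary – showing how such a neuron accumulates input, emits a binary spike at a threshold and resets:

```python
import numpy as np

def simulate_lif(current, dt=0.1, tau_m=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Simulate one leaky integrate-and-fire neuron (times in ms, voltages in mV)."""
    v = v_rest
    trace, spike_times = [], []
    for step, i_ext in enumerate(current):
        # Leaky integration: the potential decays towards rest while driven by input.
        v += dt / tau_m * (-(v - v_rest) + r_m * i_ext)
        if v >= v_thresh:                  # Threshold crossed: emit a binary spike...
            spike_times.append(step * dt)
            v = v_reset                    # ...and reset, discarding sub-threshold detail.
        trace.append(v)
    return np.array(trace), spike_times

# A constant input over 100 ms yields a regular train of all-or-nothing spikes.
trace, spikes = simulate_lif(np.full(1000, 2.0))
print(f"{len(spikes)} spikes in 100 ms")
```

The all-or-nothing spike-and-reset behaviour is the “noughts and ones” abstraction described above; the paper’s contribution is showing that a small, carefully connected group of such simple units can recover the rich dynamics of the far more complex Hodgkin-Huxley description.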


In an interview with state news agency Xinhua, co-author Li said that the team’s approach not only maintained performance levels but also doubled processing speed while cutting memory usage to a quarter of previous levels.

“This development could pave the way for optimising AI models in practical applications, boosting performance,” Li said.

“The experimental results confirm that the internal complexity model is effective and reliable for complex tasks, providing new methods and theories for incorporating neuroscience’s dynamics into AI.”

In a commentary on the paper that appeared in the same publication, Jason Eshraghian, an assistant professor of electrical and computer engineering at the University of California, Santa Cruz, said the work motivated an exploration of hardware “beyond silicon-based computing”.

“By revisiting and deepening the connection between neuroscience and AI, we may uncover new ways to build more efficient, powerful, and perhaps even more ‘brain-like’ artificial intelligence systems,” he said.

The future of AI development may hinge on combining detailed imitation of biological neuron dynamics with larger models and more robust hardware, steered by continuing advances in neuroscience.

The CAS Institute of Automation marked another significant AI milestone in June, when a collaboration with Swiss corporation SynSense yielded Speck – a brain-like neuromorphic chip with an integrated dynamic vision sensor.

Speck not only enhances task accuracy by 9 per cent but also reduces average power consumption by 60 per cent, according to its developers.