Scientists Train Rat Brain Cells to Perform Machine Learning Tasks in Real Time

Scientists at Tohoku University and Future University Hakodate in Japan have successfully trained living rat brain cells to autonomously generate complex mathematical signals using machine learning techniques. The breakthrough demonstrates that biological neural networks, when properly structured, can learn and execute computational tasks in real time, opening an entirely new frontier in computing research.

The research team, publishing their findings in the Proceedings of the National Academy of Sciences in March, created a closed-loop system by integrating cultured rat cortical neurons with high-density microelectrode arrays containing 26,400 electrodes. The neurons were confined to 128 small wells, each roughly 100 by 100 micrometers in size, with an average of 14.6 neurons per well. These wells were connected through microchannels in two different patterns: a lattice design with uniform connections and a hierarchical design with sparser, multi-scale connections.
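
The contrast between "uniform" and "multi-scale" wiring is easiest to picture as an adjacency structure over the 128 wells. The Python sketch below is purely illustrative: the well count comes from the study, but the 8-by-16 grid, the module size, and the specific connection rules are assumptions, not the authors' actual layout.

```python
import numpy as np

N_WELLS = 128  # number of wells reported in the study; everything else here is illustrative


def lattice_adjacency(rows=8, cols=16):
    """Uniform nearest-neighbour coupling on a rows x cols grid of wells."""
    n = rows * cols
    adj = np.zeros((n, n), dtype=int)
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((1, 0), (0, 1)):  # link each well to its right and lower neighbour
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    j = rr * cols + cc
                    adj[i, j] = adj[j, i] = 1
    return adj


def hierarchical_adjacency(n=N_WELLS, block=4):
    """Multi-scale coupling: dense links inside small modules, single bridges between larger groups."""
    adj = np.zeros((n, n), dtype=int)
    for start in range(0, n, block):  # fully connect each small module of `block` wells
        idx = np.arange(start, start + block)
        adj[np.ix_(idx, idx)] = 1
    np.fill_diagonal(adj, 0)
    size = block
    while size < n:  # one bridging link between neighbouring groups at each larger scale
        for start in range(0, n, 2 * size):
            adj[start, start + size] = adj[start + size, start] = 1
        size *= 2
    return adj


print(lattice_adjacency().sum() // 2, "lattice links vs",
      hierarchical_adjacency().sum() // 2, "hierarchical links")
```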

How Did Researchers Train Biological Neural Networks?

  • Electrode Array Integration: The team used a high-density microelectrode array with 26,400 electrodes to record neural activity and deliver electrical stimulation back to the neurons, creating a feedback loop that cycled roughly every 333 milliseconds.
  • Network Patterning: Instead of allowing neurons to form dense, synchronized networks that fire in lockstep, researchers used microfluidic films to constrain how neurons connected, dramatically reducing neural correlations and increasing the dimensionality of network dynamics.
  • Real-Time Learning Algorithm: The system used an algorithm called FORCE (First-Order Reduced and Controlled Error) learning, which continuously adjusted the decoder to minimize errors between the network's output and target waveforms (sketched below).
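
FORCE learning leaves the biological network itself untrained and only adjusts a linear readout decoder with a recursive least-squares rule. The Python sketch below shows the core of that decoder update; the synthetic "firing rates", the unit count, and the omission of the stimulation feedback path are simplifications for illustration, not the authors' pipeline.

```python
import numpy as np


def force_decoder_step(w, P, r, target):
    """One FORCE (recursive least squares) update of a linear decoder.

    w      : (n,) decoder weights over n recorded units
    P      : (n, n) running estimate of the inverse correlation matrix of the activity
    r      : (n,) filtered firing rates for the current feedback cycle
    target : desired value of the output waveform in this cycle
    """
    z = w @ r                      # decoder output before the update
    err = z - target               # error against the target waveform
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)        # RLS gain vector
    P -= np.outer(k, Pr)           # refine the inverse correlation estimate
    w -= err * k                   # nudge the weights to shrink the error
    return z, w, P


# Toy loop: synthetic rates driven by a sinusoid stand in for recorded neural activity.
rng = np.random.default_rng(1)
n_units, dt = 100, 0.333                         # ~333 ms per feedback cycle, as in the study
basis = rng.standard_normal((n_units, 2))
w, P = np.zeros(n_units), np.eye(n_units)
for step in range(600):
    phase = 2 * np.pi * step * dt / 10.0         # 10-second-period sine target
    r = np.tanh(basis @ np.array([np.sin(phase), np.cos(phase)]))
    r += 0.05 * rng.standard_normal(n_units)     # observation noise
    z, w, P = force_decoder_step(w, P, r, np.sin(phase))
```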

The physical constraint of the microfluidic design proved crucial to success. Without it, cultured neurons form overly synchronized networks that fail to learn any target signals. By confining neurons to small wells and controlling their connections, the researchers reduced pairwise neural correlations from 0.45 in unpatterned cultures to just 0.11 and 0.12 in the lattice and hierarchical designs, respectively.
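
These correlation figures refer to the average Pearson correlation computed over all pairs of recorded units. Below is a small sketch of how such a statistic is typically computed from binned spike counts; the bin width, unit counts, and Poisson toy data are assumptions, not the paper's analysis pipeline.

```python
import numpy as np


def mean_pairwise_correlation(spike_counts):
    """Mean Pearson correlation over all unit pairs.

    spike_counts : (n_units, n_bins) array of binned spike counts.
    """
    c = np.corrcoef(spike_counts)                # full unit-by-unit correlation matrix
    upper = c[np.triu_indices_from(c, k=1)]      # keep each pair once, drop the diagonal
    return float(np.nanmean(upper))


# Toy check: a shared drive added to independent Poisson trains raises the statistic,
# mimicking the synchronization seen in unpatterned cultures.
rng = np.random.default_rng(2)
independent = rng.poisson(2.0, size=(50, 1000))
shared = rng.poisson(2.0, size=(1, 1000))
print(mean_pairwise_correlation(independent))           # low, like the decorrelated patterned cultures
print(mean_pairwise_correlation(independent + shared))  # much higher, like the synchronized case
```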

What Computational Tasks Can Rat Neurons Actually Perform?

The trained neural networks demonstrated impressive versatility. The system successfully learned to generate sine waves with periods of 4, 10, and 30 seconds, as well as triangle and square waves. The same culture preparation could be retrained to oscillate at different frequencies without being rebuilt. Even more remarkably, the system approximated a Lorenz attractor, a three-dimensional chaotic trajectory studied in mathematics and physics, with pairwise correlations above 0.8 between predicted and target signals across all dimensions during the learning phase.
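
The Lorenz attractor is generated by three coupled differential equations, and its three coordinates form a three-channel target signal. The sketch below integrates the system with the standard chaotic parameters; the time step and initial condition are arbitrary choices here, and the paper's exact target construction may differ.

```python
import numpy as np


def lorenz_trajectory(n_steps=20000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with simple Euler steps (standard chaotic parameters)."""
    xyz = np.empty((n_steps, 3))
    x, y, z = 1.0, 1.0, 1.0
    for i in range(n_steps):
        dx = sigma * (y - x)          # dx/dt
        dy = x * (rho - z) - y        # dy/dt
        dz = x * y - beta * z         # dz/dt
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xyz[i] = (x, y, z)
    return xyz


# Each coordinate becomes one target channel; the study reports correlations above 0.8
# between the decoded and target signals for every dimension during learning.
target = lorenz_trajectory()
```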

The lattice network configuration consistently outperformed the hierarchical design across all target waveforms. Researchers believe this occurred because the lattice's denser intermodular connections produced higher firing rates, giving the linear decoder more signal to work with for accurate computation.

"This work shows that living neuronal networks are not only biologically meaningful systems but may also serve as novel computational resources," stated Hideaki Yamamoto, a professor at Tohoku University's Research Institute of Electrical Communication.

Hideaki Yamamoto, Professor at Tohoku University's Research Institute of Electrical Communication

What Are the Current Limitations of Biological Neural Computing?

The system faced significant constraints that the researchers openly acknowledged. Performance degraded substantially after training was halted and the system ran autonomously, with mean squared error increasing in 99% of trials. The feedback loop's roughly 330-millisecond latency also limited the system's ability to track fast-changing or sharp-edged waveforms. Researchers noted that reducing this delay through specialized hardware or alternative filtering approaches could expand the range of learnable targets considerably.
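
The latency argument can be made concrete with a toy calculation: if the output can only follow the target with one feedback cycle of lag, a slowly varying sine barely suffers, while a sharp-edged square wave accumulates large errors at every transition. The model below treats the loop as a pure delay and borrows waveform periods from the study; it is an assumption used only to illustrate that reasoning, not the authors' analysis.

```python
import numpy as np


def delayed_tracking_mse(target, delay_bins):
    """MSE when the output can only reproduce the target with a fixed delay."""
    output = np.roll(target, delay_bins)         # shift the signal by the loop latency
    return float(np.mean((output - target) ** 2))


dt, delay = 0.333, 1                             # one ~333 ms feedback cycle of lag
t = np.arange(0, 60, dt)
slow_sine = np.sin(2 * np.pi * t / 30.0)         # 30-second-period sine: barely affected
square = np.sign(np.sin(2 * np.pi * t / 4.0))    # sharp-edged 4-second square wave
print(delayed_tracking_mse(slow_sine, delay))    # small error
print(delayed_tracking_mse(square, delay))       # much larger error, concentrated at the edges
```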

These limitations highlight why biological neural networks remain fundamentally a proof-of-concept rather than a near-term practical alternative to silicon-based computing. The researchers identified potential future applications extending to brain-machine interfaces and neuroprosthetic devices, where biological computation could eventually offer unique advantages over purely electronic systems. However, these remain speculative directions for future research rather than demonstrated capabilities of the current system.

The Japanese research team has demonstrated that living neurons are not merely biological curiosities but potentially useful computational substrates for specific types of tasks. The ability to train cultured neural networks to perform defined computational functions opens new questions about how biological systems process information and whether they might eventually complement silicon-based approaches in specialized applications. Whether this technology scales beyond laboratory demonstrations remains an open question, but the fundamental proof of concept is now firmly established.