Brain-Inspired Reservoir Computing

Reservoir computing (e.g., echo state networks) is a learning framework in which only the weights of the output layer (the readout) are adjusted through training, while the reservoir itself is a recurrent neural network with fixed random connections. Because the number of trainable parameters is small, reservoir computing enables time-series learning with fewer training samples and lower computational cost. Within this framework, we investigate computational models of brain functions and aim to clarify how they work.
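To make this concrete, here is a minimal echo state network sketch in Python. It is an illustration under stated assumptions, not the implementation used in our studies: the reservoir size, the one-step-ahead sine-prediction task, and the ridge parameter are all illustrative choices. Only the readout W_out is fit; W_in and W stay fixed.

```python
# Minimal echo state network sketch: a fixed random recurrent
# reservoir plus a linear readout trained by ridge regression.
# Sizes, task, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 200, 1

# Fixed random weights: input and recurrent (reservoir) connections.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

# Drive the reservoir with a sine wave and collect its states.
T = 1000
u = np.sin(0.1 * np.arange(T))[:, None]
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Train only the readout: ridge regression from reservoir states to
# the next input value, discarding an initial washout period.
washout, ridge = 100, 1e-6
X, y = states[washout:-1], u[washout + 1:]
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```

Because training reduces to a single linear regression over the collected states, the computational cost is far lower than backpropagating through the recurrent network itself.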

According to the cerebellar reservoir hypothesis, the network formed by granule cells and Golgi cells in the cerebellar cortex serves as the reservoir, while synaptic plasticity at Purkinje cells corresponds to learning in the readout. Based on this hypothesis, we have developed robot control systems using cerebellar models [1] and studied oscillation-driven reservoir computing [2], which assumes theta-wave input from the hippocampus to the cerebellum. Furthermore, instead of using a randomly fixed reservoir network as is commonly done, we also introduce human brain networks (connectomes) into the reservoir to examine how the brain's non-random connectivity contributes to computation and learning [3]; a sketch of this substitution is given below.
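As an illustration of swapping the random reservoir for a structured one, the following sketch masks the recurrent weights with a small-world graph, in the spirit of [3]. The Watts-Strogatz parameters here are our own illustrative assumptions, not values from the papers; a connectome-derived adjacency matrix would be substituted the same way.

```python
# Structured (non-random) reservoir sketch: the recurrent weight
# matrix is masked by a small-world graph instead of being dense
# random. Graph parameters are illustrative assumptions.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_res = 200

# Small-world connectivity mask (each node linked to k neighbors,
# rewired with probability p), then random weights on existing edges.
G = nx.watts_strogatz_graph(n=n_res, k=6, p=0.1, seed=0)
mask = nx.to_numpy_array(G)
W = mask * rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # echo state scaling

# W can now replace the dense random reservoir in the ESN sketch
# above; the readout is trained in exactly the same way.
```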

[1] Yuji Kawai, Hiroshi Atsuta, and Minoru Asada, Adaptive robot control using modular reservoir computing to minimize multimodal errors, In Proceedings of the 2024 International Joint Conference on Neural Networks, July 2024.
[2] Yuji Kawai, Takashi Morita, Jihoon Park, and Minoru Asada, Oscillations enhance time-series prediction in reservoir computing with feedback, Neurocomputing, Vol. 648, 130728, 2025.
[3] Yuji Kawai, Jihoon Park, and Minoru Asada, A small-world topology enhances the echo state property and signal propagation in reservoir computing, Neural Networks, Vol. 112, pp. 15-23, 2019.
