Editorial: Focus on algorithms for neuromorphic computing

Neuromorphic computing provides a promising energy-efficient alternative to von Neumann computing and learning architectures. However, the best neuromorphic hardware is useless without suitable inference and learning algorithms that can fully exploit its advantages. Such algorithms often have to deal with challenging constraints posed by neuromorphic hardware, such as massive parallelism, sparse asynchronous communication, and analog and/or unreliable computing elements. This Focus Issue presents advances in various aspects of algorithms for neuromorphic computing. The collection of articles ranges from fundamental questions about the computational properties of the basic computing elements of neuromorphic systems, through algorithms for continual learning, semantic segmentation, and novel efficient learning paradigms, to algorithms for specific application domains.

The basic computational elements of most neuromorphic systems are spiking neurons. In particular, the leaky integrate-and-fire (LIF) neuron is a popular model for biological neurons that can easily be implemented in analog or digital hardware. This neuron model, however, neglects the fact that biological neurons have a spatial extent and that the spatial structure of their dendritic arborizations influences their computational properties. In the first article [1], Stöckel and Eliasmith discuss a family of LIF neurons with passive dendrites using a varying number of compartments. They find that single-layer networks of such neurons with two compartments can better approximate some functions than comparable single-compartment networks with two layers. While three compartments can provide further benefits, adding even more compartments yields only small improvements. From a general perspective, this indicates that moving part of the computation into the analog dendritic domain can lead to more powerful and efficient neuromorphic systems.
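As a rough illustration (a plain single-compartment model, not the multi-compartment dendritic neurons analyzed in [1]), the LIF dynamics can be simulated in a few lines of Python; all parameter values below are arbitrary:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3,
                 v_rest=0.0, v_reset=0.0, v_thresh=1.0, r_m=1.0):
    """Euler simulation of a single-compartment LIF neuron.

    Returns the membrane-potential trace and the spike times (in seconds).
    """
    v = v_rest
    trace, spike_times = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: tau * dv/dt = -(v - v_rest) + R * I(t)
        v += (dt / tau) * (-(v - v_rest) + r_m * i_in)
        if v >= v_thresh:                 # threshold crossing: emit a spike ...
            spike_times.append(step * dt)
            v = v_reset                   # ... and reset the membrane potential
        trace.append(v)
    return np.array(trace), spike_times

# Constant suprathreshold drive makes the neuron fire regularly.
trace, spike_times = simulate_lif(np.full(200, 1.5))
```

With a constant input current I, the membrane potential relaxes toward R·I, so the neuron fires repetitively whenever that steady state lies above the threshold.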
In the second article [2], Liu et al discuss an interesting application scenario for neuromorphic computing: the processing of mmWave radar data with spiking neural networks (SNNs). Instead of designing a customized solution for a specific application in this field, the authors propose a general neuromorphic framework for this modality, the mm-spiking neural network. They leverage intrinsic advantages of SNNs in processing noisy and sparse data, which are particularly helpful for mmWave radar data. More generally, their approach may provide an end-to-end solution for flexibly handling temporally dependent data acquired by a complex system.
Humans are able to acquire new skills continually while preserving their performance on previously learned tasks. While these continual learning capabilities come naturally to humans, current artificial learning methods struggle in the continual learning setting. For learning at the edge, where the artificial system should continually adapt to the sensory data stream in changing situations, continual learning capabilities are essential. Du et al [3] propose progressive segmented training, a simple yet effective approach for continual learning in a single network. Moreover, their training approach significantly improves computational efficiency, enabling efficient continual learning at the edge, as demonstrated on an FPGA platform.
Kim et al [4] explore the use of SNNs for semantic segmentation tasks, going beyond image recognition. SNNs offer low-power processing suitable for resource-constrained systems such as autonomous vehicles and drones. Traditional SNN optimization methods have been limited to image recognition, but this study introduces SNN optimization techniques for semantic segmentation. Direct training of SNNs with surrogate gradient learning proves more effective in reducing latency and improving performance than converting artificial neural networks (ANNs) to SNNs. The redesigned SNN segmentation architectures outperform their traditional ANN counterparts in terms of robustness and energy efficiency on various benchmark datasets.
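The core idea of surrogate gradient learning is that the non-differentiable spike function is used in the forward pass, while a smooth surrogate replaces its derivative in the backward pass. A minimal sketch using the fast-sigmoid surrogate (the specific surrogate used in [4] may differ):

```python
import numpy as np

def spike_fn(v, v_thresh=1.0):
    """Forward pass: Heaviside step -- 1 where the membrane potential
    reaches threshold, 0 elsewhere. Its true derivative is zero almost
    everywhere, which makes plain backpropagation useless."""
    return (np.asarray(v) >= v_thresh).astype(float)

def surrogate_grad(v, v_thresh=1.0, beta=10.0):
    """Backward pass: derivative of a fast sigmoid, used as a smooth
    stand-in for the step function's derivative. beta controls how
    sharply the surrogate is peaked around the threshold."""
    v = np.asarray(v)
    return 1.0 / (beta * np.abs(v - v_thresh) + 1.0) ** 2
```

In autodiff frameworks this pair is typically wrapped in a custom gradient operator (e.g. a custom autograd function in PyTorch), so the forward computation stays spiking while gradients flow through the surrogate.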
Neuromorphic hardware needs specialized training procedures. In particular, since connectivity and synaptic memory are typically the main bottlenecks in such systems, training procedures have to respect the weight quantization and connectivity constraints of the particular hardware. Petschenig et al propose in [5] a general approach, quantized rewiring, to train neuromorphic hardware while meeting hardware constraints during the entire training process. The authors show that their approach yields high-accuracy spiking and non-spiking neural networks despite very limited fan-in and weight resolution, thus enabling efficient training of sparsely connected, low-precision neuromorphic systems.
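To make this kind of constraint concrete, here is a generic projection sketch (not the quantized rewiring algorithm of [5] itself): after each update, the weights are projected onto a signed quantization grid, and each neuron's fan-in is capped by keeping only its largest-magnitude incoming weights:

```python
import numpy as np

def project_weights(w, n_bits=4, w_max=1.0, fan_in=8):
    """Project a (pre x post) weight matrix onto hardware constraints:
    a signed n_bits quantization grid and at most fan_in inputs per neuron."""
    levels = 2 ** (n_bits - 1) - 1                   # e.g. 7 magnitude levels for 4 bits
    step = w_max / levels
    # Snap every weight to the nearest grid point in [-w_max, w_max].
    w_q = np.clip(np.round(w / step), -levels, levels) * step
    # Enforce the fan-in limit for each postsynaptic neuron (column).
    for j in range(w_q.shape[1]):
        magnitudes = np.abs(w_q[:, j])
        if np.count_nonzero(magnitudes) > fan_in:
            keep = np.argsort(magnitudes)[-fan_in:]  # indices of the largest weights
            mask = np.zeros(w_q.shape[0], dtype=bool)
            mask[keep] = True
            w_q[:, j] *= mask
    return w_q
```

Applying such a projection only once after training usually hurts accuracy; the point made in [5] is to keep the network feasible throughout training.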
Since supervised learning of neural networks requires large labeled data sets and expensive training procedures in hardware, an important question is how unsupervised learning techniques can be realized efficiently on neuromorphic systems. Data from experimental neuroscience indicate that local synaptic plasticity rules that depend on the membrane potential of neurons, termed voltage-dependent synaptic plasticity (VDSP), may play an important role in unsupervised learning in the brain. In [6], Goupy et al apply VDSP to a convolutional spiking neural network. Features of the input are learned in successive convolutional layers in an unsupervised manner to obtain a discriminative feature representation. This work indicates an important role for local unsupervised learning rules in neuromorphic systems.
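In contrast to spike-timing-dependent rules, a voltage-dependent rule gates the weight update by the postsynaptic membrane potential rather than by postsynaptic spike times. A toy version of such a rule (an illustrative assumption, not the exact rule used in [6]): on each presynaptic spike, potentiate synapses onto depolarized postsynaptic neurons and depress synapses onto neurons near rest:

```python
import numpy as np

def vdsp_update(w, pre_spikes, v_post, v_mid=0.5, lr=0.1, w_max=1.0):
    """Toy voltage-dependent plasticity step for a (pre x post) weight matrix.

    pre_spikes: binary vector of presynaptic spikes in the current time step.
    v_post:     postsynaptic membrane potentials.
    Synapses of spiking presynaptic neurons are potentiated where
    v_post > v_mid and depressed where v_post < v_mid; the rule is
    purely local to each synapse.
    """
    dw = lr * np.outer(pre_spikes, v_post - v_mid)
    return np.clip(w + dw, 0.0, w_max)

w = np.full((3, 2), 0.5)
pre = np.array([1.0, 0.0, 1.0])    # presynaptic neurons 0 and 2 spike
v_post = np.array([0.9, 0.1])      # post-neuron 0 depolarized, 1 near rest
w_new = vdsp_update(w, pre, v_post)
```

Because the update uses only quantities available at the synapse, it needs no global error signal, which is what makes this class of rules attractive for on-chip unsupervised learning.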
What are the best algorithmic concepts for learning and inference in neuromorphic systems? We hope that this collection provides novel ideas and insights into this pressing question for the field of neuromorphic computing and engineering.