NCE focus issue: extreme edge computing


Biological intelligence imparts organisms with the ability to overcome a number of key challenges, such as adapting to dynamic environments, learning from experience, and making complex decisions, even within a daunting set of constraints (e.g. extremely limited energy). Interestingly, we are encountering several analogous challenges and constraints as artificial intelligence (AI) begins to move from the cloud to the edge in the ever-growing internet of things. Neuromorphic computing is poised to play a critical role in moving AI to the edge, as it enables the implementation of state-of-the-art machine learning algorithms (e.g. deep neural networks) on hardware platforms with limited resources (energy, precision, I/O, etc). This NCE focus issue on Extreme Edge Computing brings together a variety of works aimed at designing neuromorphic computing for AI at the edge. The collection includes four original research articles and one topical review paper, which are briefly summarized below.

In the first article, Abernot and Todri-Sanial [1] present an oscillatory neural network (ONN) design for image edge detection on resource-constrained devices. ONNs perform computations on temporally encoded information using interactions between coupled oscillators. The ONN coupling factors are tuned during training to sculpt an energy landscape whose minima correspond to the desired states for a given input. While recurrent ONNs have previously been explored for associative memory applications, this article proposes new feedforward ONN topologies that can perform classification tasks. The authors demonstrate their feedforward ONN design approach on an image edge detection task in software and on FPGA hardware. They show that the technique can find edges in real time for images up to 170 × 170 pixels, with qualitative similarity to standard algorithms such as Sobel.
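For reference, the Sobel baseline that the ONN results are compared against is straightforward to sketch. The following minimal Python implementation (plain lists and the standard 3 × 3 kernels; an illustrative sketch, not the authors' code) computes the gradient-magnitude map of a grayscale image:

```python
import math

# Classic Sobel kernels: horizontal and vertical gradient estimators
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel(image):
    """Return the gradient-magnitude map of a 2D grayscale image (borders left at 0)."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0.0
            for dy in range(3):
                for dx in range(3):
                    p = image[y + dy - 1][x + dx - 1]
                    gx += GX[dy][dx] * p
                    gy += GY[dy][dx] * p
            out[y][x] = math.hypot(gx, gy)  # edge strength at (y, x)
    return out

# A 5x5 test image with a vertical step edge between columns 1 and 2
img = [[0, 0, 1, 1, 1] for _ in range(5)]
edges = sobel(img)  # strong responses flank the step, zero in flat regions
```

The response is large only where the intensity changes, which is the same qualitative behavior the feedforward ONN is shown to reproduce.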
Future implementations of AI for edge devices will undoubtedly leverage unique properties of emerging paradigms such as quantum computing. Quantum computing's extreme parallelism holds significant potential for high throughput computation on large quantities of data associated with AI workloads. In the second article of this focus issue, Ajayan and James [2] propose a hybrid quantum-spiking neural network for image classification. The network uses a feedforward spiking neural network to extract salient features of the input, and then feeds them to a tunable quantum circuit for classification. The authors' results indicate that the design achieves good accuracy on image datasets like MNIST, even in the presence of input noise and quantum errors.
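To illustrate the parameterized-circuit idea in miniature (a generic sketch, not the specific architecture of [2]), a single-qubit "variational classifier" can be written in a few lines: a feature is angle-encoded by an RY rotation, a trainable RY rotation follows, and the probability of measuring |1⟩ serves as the class score. All names and data here are hypothetical:

```python
import math

def qubit_score(x, theta):
    """RY(x) then RY(theta) applied to |0> compose to RY(x + theta),
    leaving amplitude sin((x + theta)/2) on |1>; return P(measure |1>)."""
    return math.sin((x + theta) / 2) ** 2

# Toy 1-feature dataset: small angles labeled 0, large angles labeled 1
data = [(0.2, 0), (0.4, 0), (2.6, 1), (2.9, 1)]

# "Train" the single circuit parameter theta by a coarse grid search
best_theta, best_acc = 0.0, -1.0
for k in range(64):
    theta = -math.pi + 2 * math.pi * k / 64
    acc = sum(int((qubit_score(x, theta) > 0.5) == bool(y))
              for x, y in data) / len(data)
    if acc > best_acc:
        best_theta, best_acc = theta, acc
```

In a hybrid design like the one described above, the inputs to such a circuit would be spike-derived features rather than raw data, and the circuit would have many more qubits and trainable parameters.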
In the third article of this collection, Schuman et al [3] describe a study comparing two fundamentally different approaches for training spiking neural networks to control autonomous vehicles. In the first approach, the structure and parameters of the networks are found using an evolutionary algorithm, which starts with a random set of candidate networks and slowly refines them through crossover and mutation operations. In the second approach, the parameters of a network with fixed structure are found using imitation learning, a supervised training approach in which labeled data for a particular task are based on observations of another entity performing the same task. The researchers compared the two algorithms on a scaled-down Formula One racing task, in which the spiking neural network had to control a 1/10th-scale Formula One car to complete laps around different tracks while avoiding collisions. A key result of the work is that evolutionary algorithms can find much more efficient network structures than supervised learning alone, which will be critical for edge computing applications.
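The evolutionary loop described above (random initial population, selection, crossover, mutation) can be sketched on a toy parameter-fitting problem; the fitness function and all parameters below are hypothetical stand-ins, not the authors' racing benchmark:

```python
import random

random.seed(0)

# Toy stand-in fitness: negative squared distance to an assumed target vector
TARGET = [0.5, -0.3, 0.8, 0.1]

def fitness(genome):
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def crossover(a, b):
    # One-point crossover between two parent genomes
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(genome, rate=0.2, scale=0.1):
    # Perturb each gene with small Gaussian noise at probability `rate`
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

# Start with a random population and slowly refine it
population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # truncation selection (elitist)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = max(population, key=fitness)  # converges near TARGET
```

In neuroevolution, the genome would additionally encode network structure (neurons and synapses), which is how such methods can discover the compact networks reported in [3].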
Understanding the impacts of energy constraints on spiking neural networks is vital for realizing the full potential of neuromorphic computing on extreme edge devices. In the fourth article, Fountain and Merkel [4] explore the effects of energy constraints on the dynamics and learning capabilities of liquid state machines (LSMs). LSMs are randomly connected, recurrent spiking neural networks that can extract spatiotemporal features from time series data. In [4], the energy consumption of the spiking neurons is constrained by limiting the average spike rate of small populations. The paper shows that these constraints affect key dynamical properties of the LSM, such as the Lyapunov exponent and the separation ratio. Results indicate that energetic constraints can improve LSM performance on epileptic seizure detection and gait identification tasks by optimizing the network's dynamics.
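One simple way to picture a population-level spike-rate constraint (an illustrative mechanism with made-up parameters, not necessarily the exact scheme used in [4]) is an energy budget that silences a population whenever its running average spike rate exceeds a cap:

```python
import random

random.seed(1)

POP_SIZE = 50
RATE_BUDGET = 0.1  # cap on the running average spikes per neuron per step
DECAY = 0.9        # decay factor of the running-rate estimate

running_rate = 0.0
rates = []
for step in range(1000):
    # Unconstrained candidate spikes (a random stand-in for neuron dynamics)
    candidates = [random.random() < 0.3 for _ in range(POP_SIZE)]
    if running_rate > RATE_BUDGET:
        spikes = [False] * POP_SIZE  # budget exceeded: population is silenced
    else:
        spikes = candidates
    step_rate = sum(spikes) / POP_SIZE
    running_rate = DECAY * running_rate + (1 - DECAY) * step_rate
    rates.append(step_rate)

# The constraint pulls the realized average rate well below the
# unconstrained 0.3, toward the neighborhood of the budget
avg_rate = sum(rates) / len(rates)
```

Constraints of this kind reshape the network's activity regime, which is why [4] examines their effect on quantities like the Lyapunov exponent.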
In the final article of this focus issue, Nguyen et al [5] provide a review of memristor crossbar-based neural networks for edge intelligence. Memristors are electrically tunable resistors that hold promise for efficient implementations of neuromorphic systems, especially at the extreme edge, due to their small footprint, low power consumption, and behavioral similarity to biological synapses. In their review, the authors of [5] discuss key technical challenges associated with employing memristors in neuromorphic computing circuits. Device-level characteristics such as low memristor precision and circuit-level non-idealities such as parasitic resistance are described, along with techniques that have been applied to overcome these challenges. The paper provides a good introduction to some of the key opportunities and tradeoffs that need to be considered when designing neuromorphic computing systems based on memristor devices.
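The core operation such crossbars provide is an analog matrix-vector multiply: input voltages drive the rows, programmed conductances act as weights, and each column current sums the products by Kirchhoff's current law. The sketch below is an idealized model (it ignores the parasitics the review discusses, and all values are illustrative); the quantization step mimics limited memristor precision:

```python
def quantize(g, levels, g_min=0.0, g_max=1.0):
    """Round a conductance to one of `levels` discrete device states."""
    step = (g_max - g_min) / (levels - 1)
    return g_min + round((g - g_min) / step) * step

def crossbar_mvm(voltages, conductances):
    """Ideal crossbar: column current I_j = sum_i V_i * G[i][j]."""
    cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(cols)]

# Hypothetical weight matrix (as conductances) and input voltage vector
weights = [[0.20, 0.71], [0.55, 0.33], [0.90, 0.05]]
v_in = [1.0, 0.5, 0.25]

ideal = crossbar_mvm(v_in, weights)
# Same computation with conductances restricted to 4 device levels:
low_precision = crossbar_mvm(v_in, [[quantize(g, levels=4) for g in row]
                                    for row in weights])
```

Comparing `ideal` and `low_precision` shows how few-level devices distort the computed products, one of the precision challenges surveyed in [5].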
We hope that this collection provides readers with inspiring viewpoints on the many challenges and exciting opportunities of implementing neuromorphic computing on future edge devices.

Data availability statement
No new data were created or analysed in this study.