Modern computation based on the von Neumann architecture is now a mature, cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks that interchange data intensively and continuously, and this data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale, with 10¹⁸ calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann-type architectures they will consume between 20 and 30 megawatts of power, and they will not have intrinsic, physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives in which leading researchers in the neuromorphic community provide their own view of the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those who are just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.
ISSN: 2634-4386
Neuromorphic Computing and Engineering is a multidisciplinary, open access journal publishing cutting edge research on the design, development and application of artificial neural networks and systems from both a hardware and computational perspective. For detailed information about subject coverage see the About the journal section.
Free for readers. All article publication charges are currently paid by IOP Publishing.
Dennis V Christensen et al 2022 Neuromorph. Comput. Eng. 2 022501
Matteo Cucchi et al 2022 Neuromorph. Comput. Eng. 2 032002
This manuscript serves a specific purpose: to give readers from fields such as materials science, chemistry, or electronics an overview of implementing a reservoir computing (RC) experiment with their material system. Introductory literature on the topic is rare, and the vast majority of reviews put forth the basics of RC while taking for granted concepts that may be nontrivial to someone unfamiliar with the machine learning field (see, for example, Lukoševičius 2012 Neural Networks: Tricks of the Trade (Berlin: Springer) pp 659–686). This is unfortunate considering the large pool of material systems that show nonlinear behavior and short-term memory that may be harnessed to design novel computational paradigms. RC offers a framework for computing with material systems that circumvents typical problems arising when implementing traditional, fully fledged feedforward neural networks on hardware, such as the need for minimal device-to-device variability and for control over each unit/neuron and connection. Instead, one can use a random, untrained reservoir where only the output layer is optimized, for example with linear regression. In the following, we highlight the potential of RC for hardware-based neural networks, the advantages over more traditional approaches, and the obstacles to overcome for their implementation. Preparing a high-dimensional nonlinear system as a well-performing reservoir for a specific task is not as easy as it may seem at first sight. We hope this tutorial will lower the barrier for scientists attempting to exploit their nonlinear systems for computational tasks typically carried out in the fields of machine learning and artificial intelligence. A simulation tool to accompany this paper is available online.
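For readers who want a concrete starting point, the readout-only training that the tutorial describes can be sketched in a few lines. The snippet below is a minimal echo-state-network example of our own, not code from the paper or its simulation tool: a fixed random reservoir is driven by a random input stream, and only the linear output layer is fitted, by ridge regression, to a simple target that needs both memory and nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir (echo state network); nothing inside it is trained.
n_in, n_res, n_steps = 1, 100, 500
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
b = rng.uniform(-0.5, 0.5, n_res)                 # bias breaks odd symmetry
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

u = rng.uniform(-1, 1, (n_steps, n_in))           # random input stream
y_target = np.zeros(n_steps)                      # toy task: y[t] = u[t-1]^2
y_target[1:] = u[:-1, 0] ** 2                     # needs memory + nonlinearity

x = np.zeros(n_res)
states = np.empty((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W @ x + W_in @ u[t] + b)          # reservoir update
    states[t] = x

# Only the readout is trained, here by ridge regression on the states.
washout = 50                                      # discard initial transient
X, Y = states[washout:], y_target[washout:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ Y)
nmse = np.mean((X @ W_out - Y) ** 2) / np.var(Y)
```

Replacing the `tanh` update with measured states from a physical system leaves the training step unchanged, which is precisely the appeal of RC for material systems.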
Kevin Hunter et al 2022 Neuromorph. Comput. Eng. 2 034004
In principle, sparse neural networks should be significantly more efficient than traditional dense networks. Neurons in the brain exhibit two types of sparsity; they are sparsely interconnected and sparsely active. These two types of sparsity, called weight sparsity and activation sparsity, when combined, offer the potential to reduce the computational cost of neural networks by two orders of magnitude. Despite this potential, today's neural networks deliver only modest performance benefits using just weight sparsity, because traditional computing hardware cannot efficiently process sparse networks. In this article we introduce Complementary Sparsity, a novel technique that significantly improves the performance of dual sparse networks on existing hardware. We demonstrate that we can achieve high performance running weight-sparse networks, and we can multiply those speedups by incorporating activation sparsity. Using Complementary Sparsity, we show up to 100× improvement in throughput and energy efficiency performing inference on FPGAs. We analyze scalability and resource tradeoffs for a variety of kernels typical of commercial convolutional networks such as ResNet-50 and MobileNetV2. Our results with Complementary Sparsity suggest that weight plus activation sparsity can be a potent combination for efficiently scaling future AI models.
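The core packing idea can be illustrated with a toy sketch (our own construction, not the paper's kernel implementation): several weight vectors with disjoint nonzero positions are overlaid into one dense vector, so a single dense pass computes all of their sparse dot products at once.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 16, 4        # vector length, number of sparse kernels to pack

# Build k weight vectors, each 1/k dense, with complementary
# (non-overlapping) supports, so together they fill every position once.
perm = rng.permutation(n)
supports = np.array_split(perm, k)
kernels = np.zeros((k, n))
for i, idx in enumerate(supports):
    kernels[i, idx] = rng.normal(size=idx.size)

packed = kernels.sum(axis=0)        # overlay into ONE dense vector
owner = np.empty(n, dtype=int)      # which kernel owns each position
for i, idx in enumerate(supports):
    owner[idx] = i

x = rng.normal(size=n)
products = packed * x               # a single dense elementwise pass
y_packed = np.array([products[owner == i].sum() for i in range(k)])
y_direct = kernels @ x              # reference: k separate sparse dot products
assert np.allclose(y_packed, y_direct)
```

Activation sparsity multiplies the savings on top of this: when most entries of `x` are zero, the single dense pass touches even fewer nonzero products.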
Qing Wan et al 2022 Neuromorph. Comput. Eng. 2 042501
The data throughput of von Neumann architecture-based computing systems is limited by the separation of processing and memory and by the speed mismatch between the two units. As a result, it is quite difficult to improve the energy efficiency of conventional computing systems, especially when dealing with unstructured data. Meanwhile, artificial intelligence and robotics still perform poorly in autonomy, creativity, and sociality, a shortfall attributed to the enormous computational requirements of sensorimotor skills. These two plights have urged the imitation and replication of biological systems in terms of computing, sensing, and even motor control. Hence, so-called neuromorphic systems have drawn worldwide attention in the recent decade, aimed at addressing the aforementioned needs by mimicking the neural system. Recent developments in emerging memory devices, nanotechnology, and materials science have provided an unprecedented opportunity for this aim.
Maria Elias Pereira et al 2023 Neuromorph. Comput. Eng. 3 022002
Neuromorphic computing has been gaining momentum for the past decades and has been put forward as a replacement for the aging technology of conventional computing systems. Artificial neural networks (ANNs) can be built from memristor crossbars in hardware and perform in-memory computing and storage in a power-, cost- and area-efficient way. In optoelectronic memristors (OEMs), resistive switching (RS) can be controlled by both optical and electronic signals. Using light as a synaptic weight modulator provides a high-speed, non-destructive method that does not depend on electrical wires and thus avoids crosstalk issues. In particular, in artificial visual systems, OEMs can act as an artificial retina and combine optical sensing with high-level image processing. Therefore, several efforts have been made by the scientific community to develop OEMs that can meet the demands of each specific application. In this review, recent advances in inorganic OEMs are summarized and discussed. The engineering of the device structure provides the means to manipulate RS performance, and thus a comprehensive analysis is performed of the memristor material structures proposed so far and their specific characteristics. Moreover, their potential applications in logic gates, ANNs and, in more detail, artificial visual systems are also assessed, taking into account the figures of merit described so far.
Michele Di Lauro et al 2024 Neuromorph. Comput. Eng. 4 024001
A novel organic neuromorphic device performing pattern classification is presented and demonstrated. It features an artificial soma capable of dendritic integration from three pre-synaptic neurons. The time-response of the interface between electrolytic solutions and organic mixed ionic-electronic conductors is proposed as the sole computational feature for pattern recognition, and it is easily tuned in the organic dendritic integrator by simply controlling the electrolyte ionic strength. The classifier is benchmarked in speech-recognition experiments on a sample of 14 words, encoded either from audio tracks or from kinematic data, showing excellent discrimination performance in a planar, miniaturizable, fully passive device designed to be promptly integrated into more complex architectures where on-board pattern classification is required.
Erika Covi et al 2022 Neuromorph. Comput. Eng. 2 012002
The shift towards a distributed computing paradigm, where multiple systems acquire and elaborate data in real-time, leads to challenges that must be met. In particular, it is becoming increasingly essential to compute on the edge of the network, close to the sensor collecting data. The requirements of a system operating on the edge are very tight: power efficiency, low area occupation, fast response times, and on-line learning. Brain-inspired architectures such as spiking neural networks (SNNs) use artificial neurons and synapses that simultaneously perform low-latency computation and internal-state storage with very low power consumption. Still, they mainly rely on standard complementary metal-oxide-semiconductor (CMOS) technologies, making SNNs unfit to meet the aforementioned constraints. Recently, emerging technologies such as memristive devices have been investigated to flank CMOS technology and overcome edge computing systems' power and memory constraints. In this review, we will focus on ferroelectric technology. Thanks to its CMOS-compatible fabrication process and extreme energy efficiency, ferroelectric devices are rapidly affirming themselves as one of the most promising technologies for neuromorphic computing. Therefore, we will discuss their role in emulating neural and synaptic behaviors in an area and power-efficient way.
Ole Richter et al 2024 Neuromorph. Comput. Eng. 4 014003
With the remarkable progress that technology has made, the need to process data near the sensors at the edge has increased dramatically. The electronic systems used in these applications must process data continuously, in real-time, and extract relevant information using the smallest possible energy budgets. A promising approach for implementing always-on processing of sensory signals that supports on-demand, sparse, and edge-computing is to take inspiration from biological nervous systems. Following this approach, we present a brain-inspired platform for prototyping real-time, event-based spiking neural networks. The proposed system supports the direct emulation of dynamic and realistic neural processing phenomena such as short-term plasticity, NMDA gating, AMPA diffusion, homeostasis, spike-frequency adaptation, conductance-based dendritic compartments, and spike transmission delays. The analog circuits that implement these primitives are paired with low-latency asynchronous digital circuits for routing and mapping events. This asynchronous infrastructure enables the definition of different network architectures and provides direct event-based interfaces to convert and encode data from event-based and continuous-signal sensors. Here we describe the overall system architecture, characterize the mixed-signal analog-digital circuits that emulate neural dynamics, demonstrate their features with experimental measurements, and present the low- and high-level software ecosystem that can be used to configure the system. The flexibility to emulate different biologically plausible neural networks, and the chip's ability to monitor both population and single-neuron signals in real-time, make it possible to develop and validate complex models of neural processing for both basic research and edge-computing applications.
Zirui Zhang et al 2022 Neuromorph. Comput. Eng. 2 032004
Neuromorphic computing systems employing artificial synapses and neurons are expected to overcome the limitations of the present von Neumann computing architecture in terms of efficiency and bandwidth. Traditional neuromorphic devices have used 3D bulk materials, and the resulting device size is therefore difficult to scale down further for the high-density integration required by highly parallel computing. The emergence of two-dimensional (2D) materials offers a promising solution, as evidenced by the surge of reported 2D materials functioning as neuromorphic devices for next-generation computing. In this review, we summarize the 2D materials and heterostructures used for neuromorphic computing devices, classified by working mechanism and device geometry. We then survey neuromorphic device arrays and their applications, including artificial visual, tactile, and auditory functions. Finally, we discuss the current challenges that 2D materials face in achieving practical neuromorphic devices, providing a perspective on improving device performance and system integration level. This will deepen our understanding of 2D materials and their heterojunctions and provide a guide to designing high-performing memristors, while the challenges encountered in industry are also discussed to indicate the development direction of memristors.
James B Aimone et al 2022 Neuromorph. Comput. Eng. 2 032003
Though neuromorphic computers have typically targeted applications in machine learning and neuroscience ('cognitive' applications), they have many computational characteristics that are attractive for a wide variety of computational problems. In this work, we review the current state-of-the-art for non-cognitive applications on neuromorphic computers, including simple computational kernels for composition, graph algorithms, constrained optimization, and signal processing. We discuss the advantages of using neuromorphic computers for these different applications, as well as the challenges that still remain. The ultimate goal of this work is to bring awareness to this class of problems for neuromorphic systems to the broader community, particularly to encourage further work in this area and to make sure that these applications are considered in the design of future neuromorphic systems.
Felix Wang et al 2024 Neuromorph. Comput. Eng. 4 024002
As modern neuroscience tools acquire more details about the brain, the need to move towards biological-scale neural simulations continues to grow. However, effective simulations at scale remain a challenge. Beyond just the tooling required to enable parallel execution, there is also the unique structure of the synaptic interconnectivity, which is globally sparse but has relatively high connection density and non-local interactions per neuron. There are also various practicalities to consider in high performance computing applications, such as the need for serializing neural networks to support potentially long-running simulations that require checkpoint-restart. Although acceleration on neuromorphic hardware is also a possibility, development in this space can be difficult as hardware support tends to vary between platforms and software support for larger scale models also tends to be limited.
In this paper, we focus our attention on Simulation Tool for Asynchronous Cortical Streams (STACS), a spiking neural network simulator that leverages the Charm++ parallel programming framework, with the goal of supporting biological-scale simulations as well as interoperability between platforms. Central to these goals is the implementation of scalable data structures suitable for efficiently distributing a network across parallel partitions. Here, we discuss a straightforward extension of a parallel data format with a history of use in graph partitioners, which also serves as a portable intermediate representation for different neuromorphic backends.
We perform scaling studies on the Summit supercomputer, examining the capabilities of STACS in terms of network build and storage, partitioning, and execution. We highlight how a suitably partitioned, spatially dependent synaptic structure introduces a communication workload well-suited to the multicast communication supported by Charm++. We evaluate the strong and weak scaling behavior for networks on the order of millions of neurons and billions of synapses, and show that STACS achieves competitive levels of parallel efficiency.
Peng Chen et al 2024 Neuromorph. Comput. Eng. 4 014012
Designing compact computing hardware and systems is highly desirable for resource-restricted edge computing applications. Utilizing the rich dynamics in a physical device for computing is a unique approach to creating complex functionalities with a miniaturized footprint. In this work, we developed a dynamical electrochemical memristor from a static memristor by replacing the gate material. The dynamical device possessed short-term fading dynamics and exhibited distinct frequency-dependent responses to varying input signals, enabling its use as a single-device frequency classifier. Simulation showed that the device's responses to different frequency components in a mixed-frequency signal were additive, with nonlinear attenuation at higher frequencies, providing a guideline for designing systems that process complex signals. We used a rate-coding scheme to convert real-world auditory recordings into fixed-amplitude spike trains, decoupling amplitude-based from frequency-based information, and were able to demonstrate auditory classification of different animals. The work provides a new building block for temporal information processing.
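A minimal version of the rate-coding step described above can be sketched as follows; the signal, sampling rate, and peak spike rate are illustrative choices, not the paper's settings. The instantaneous signal value sets the per-bin spike probability, so the resulting spike train has fixed unit amplitude and carries information only in spike density.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "auditory" signal: a 5 Hz amplitude envelope sampled at 1 kHz.
fs, dur = 1000, 1.0
t = np.arange(0, dur, 1.0 / fs)
signal = 0.5 * (1.0 + np.sin(2 * np.pi * 5 * t))   # normalized to [0, 1]

# Rate coding: the instantaneous value sets the spike probability per bin,
# so every emitted spike has identical unit amplitude and the information
# lives entirely in spike timing/density.
max_rate = 200.0                                   # assumed peak rate (Hz)
p_spike = signal * max_rate / fs
spikes = (rng.random(t.size) < p_spike).astype(np.uint8)

mean_rate = spikes.sum() / dur                     # ~ max_rate * mean(signal)
```

Feeding such fixed-amplitude trains to a frequency-sensitive device then isolates the device's response to the temporal structure of the input.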
Ugurcan Cakal et al 2024 Neuromorph. Comput. Eng. 4 014011
Mixed-signal neuromorphic processors provide extremely low-power operation for edge inference workloads, taking advantage of sparse asynchronous computation within spiking neural networks (SNNs). However, deploying robust applications to these devices is complicated by limited controllability over analog hardware parameters, as well as unintended parameter and dynamical variations of analog circuits due to fabrication non-idealities. Here we demonstrate a novel methodology for offline training and deployment of SNNs to the mixed-signal neuromorphic processor DYNAP-SE2. Our methodology applies gradient-based training to a differentiable simulation of the mixed-signal device, coupled with an unsupervised weight quantization method to optimize the network's parameters. Parameter noise injection during training provides robustness to the effects of quantization and device mismatch, making the method a promising candidate for real-world applications under hardware constraints and non-idealities. This work extends Rockpool, an open-source deep-learning library for SNNs, with support for accurate simulation of mixed-signal SNN dynamics. Our approach simplifies the development and deployment process for the neuromorphic community, making mixed-signal neuromorphic processors more accessible to researchers and developers.
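The noise-injection idea generalizes beyond SNNs and can be sketched on a toy linear model (our own example, not Rockpool code): perturbing the weights on every forward pass during training yields a solution that still performs well under a fresh mismatch draw at deployment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression task; names and constants here are illustrative.
n_feat, n_samples = 8, 256
w_true = rng.normal(size=n_feat)
X = rng.normal(size=(n_samples, n_feat))
y = X @ w_true

w = np.zeros(n_feat)
lr, mismatch_sigma = 0.05, 0.1
for _ in range(300):
    # Mismatch-aware training: every forward pass sees a randomly
    # perturbed copy of the weights, mimicking analog device variation.
    w_noisy = w * (1 + mismatch_sigma * rng.normal(size=n_feat))
    grad = X.T @ (X @ w_noisy - y) / n_samples   # gradient through noisy pass
    w -= lr * grad

# "Deployment": evaluate under a fresh, unseen mismatch draw.
w_deploy = w * (1 + mismatch_sigma * rng.normal(size=n_feat))
mse = np.mean((X @ w_deploy - y) ** 2)
```

In the paper's setting the same principle is applied to SNN parameters inside a differentiable simulation of the DYNAP-SE2, combined with weight quantization.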
Joshua Robertson et al 2024 Neuromorph. Comput. Eng. 4 014010
Inspired by efficient biological spike-based neural networks, we demonstrate for the first time the detection and tracking of target patterns in image and video inputs at high-speed rates with networks of multiple artificial spiking optical neurons. Using photonic systems of in-parallel spiking vertical cavity surface emitting lasers (VCSELs), we demonstrate the implementation of multiple convolutional kernel operators which, in combination with optical spike signalling, enable the detection and tracking of target features in images/video feeds at an ultrafast photonic operation speed of 1 ns per pixel. Alongside a single layer optical spiking neural network (SNN) demonstration, a multi-layer network of photonic (GHz-rate) spike-firing neurons is reported where the photonic system successfully tracks a large complex feature (Handwritten Digit 3). The consecutive photonic layers perform spike-enabled image reduction and convolution operations, and interact with a software-implemented SNN, that learns the feature patterns that best identify the target to provide a high detection efficiency even in the presence of a distractor feature. This work therefore highlights the effectiveness of combining neuromorphic photonic hardware and software SNNs, for efficient learning and ultrafast operation, thanks to the use of spiking light signals, towards tackling complex AI and computer vision problems.
Lyes Khacef et al 2023 Neuromorph. Comput. Eng. 3 042001
Understanding how biological neural networks carry out learning using spike-based local plasticity mechanisms can lead to the development of real-time, energy-efficient, and adaptive neuromorphic processing systems. A large number of spike-based learning models have recently been proposed following different approaches. However, it is difficult to assess if these models can be easily implemented in neuromorphic hardware, and to compare their features and ease of implementation. To this end, in this survey, we provide an overview of representative brain-inspired synaptic plasticity models and mixed-signal complementary metal–oxide–semiconductor neuromorphic circuits within a unified framework. We review historical, experimental, and theoretical approaches to modeling synaptic plasticity, and we identify computational primitives that can support low-latency and low-power hardware implementations of spike-based learning rules. We provide a common definition of a locality principle based on pre- and postsynaptic neural signals, which we propose as an important requirement for physical implementations of synaptic plasticity circuits. Based on this principle, we compare the properties of these models within the same framework, and describe a set of mixed-signal electronic circuits that can be used to implement their computing principles, and to build efficient on-chip and online learning in neuromorphic processing systems.
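As a concrete instance of a rule that satisfies such a locality principle, the sketch below implements pair-based STDP with exponentially decaying pre- and post-synaptic traces; every weight update uses only signals available at the synapse itself. All constants are illustrative, not taken from any specific model in the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pair-based STDP with eligibility traces; all constants are illustrative.
T, dt = 200, 1.0                  # steps, ms per step
tau_pre, tau_post = 20.0, 20.0    # trace time constants (ms)
a_plus, a_minus = 0.01, 0.012     # LTP / LTD amplitudes

pre = (rng.random(T) < 0.05).astype(float)    # pre-synaptic spike train
post = (rng.random(T) < 0.05).astype(float)   # post-synaptic spike train

w, x_pre, x_post = 0.5, 0.0, 0.0
for t in range(T):
    x_pre += -(dt / tau_pre) * x_pre + pre[t]      # decaying pre trace
    x_post += -(dt / tau_post) * x_post + post[t]  # decaying post trace
    w += a_plus * x_pre * post[t]                  # potentiate on post spike
    w -= a_minus * x_post * pre[t]                 # depress on pre spike
    w = min(max(w, 0.0), 1.0)                      # keep weight bounded

# Only pre/post-local quantities (spikes and traces) ever touch the weight.
```

Because traces can be realized as leaky analog state variables, rules of this form map naturally onto the mixed-signal circuits the survey describes.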
Xuan Hu et al 2023 Neuromorph. Comput. Eng. 3 022003
Topological solitons are exciting candidates for the physical implementation of next-generation computing systems. As these solitons are nanoscale and can be controlled with minimal energy consumption, they are ideal to fulfill emerging needs for computing in the era of big data processing and storage. Magnetic domain walls (DWs) and magnetic skyrmions are two types of topological solitons that are particularly exciting for next-generation computing systems in light of their non-volatility, scalability, rich physical interactions, and ability to exhibit non-linear behaviors. Here we summarize the development of computing systems based on magnetic topological solitons, highlighting logical and neuromorphic computing with magnetic DWs and skyrmions.
Pankaj Sharma and Jan Seidel 2023 Neuromorph. Comput. Eng. 3 022001
Mimicking and replicating the function of biological synapses with engineered materials is a challenge for the 21st century. The field of neuromorphic computing has recently seen significant developments, and new concepts are being explored. One of these approaches uses topological defects, such as domain walls in ferroic materials, especially ferroelectrics, that can naturally be addressed by electric fields to alter and tailor their intrinsic or extrinsic properties and functionality. Here, we review concepts of neuromorphic functionality found in ferroelectric domain walls and give a perspective on future developments and applications in low-energy, agile, brain-inspired electronics and computing.
Sergey Prosandeev et al 2023 Neuromorph. Comput. Eng. 3 012002
This review summarizes recent works, all using a specific atomistic approach, that predict and explain the occurrence of key features for neuromorphic computing in three archetypical dipolar materials when they are subject to THz excitations. The main ideas behind this atomistic approach are provided, and illustrations are given for model relaxor ferroelectrics, antiferroelectrics, and normal ferroelectrics, highlighting the important potential of polar materials as candidates for neuromorphic computing. Particular emphasis is placed on topics such as the connection between neuromorphic features and percolation theory, local minima in energy paths, topological transitions, and the anharmonic oscillator model, depending on the material under investigation. By considering three different and major polar material families, this work provides a complete and innovative toolbox for designing polar-based neuromorphic systems.
Hueber et al
Designing processors for implantable closed-loop neuromodulation systems presents a formidable challenge due to the constrained operational environment, requiring low latency and high energy efficiency. 
Previous benchmarks have provided limited insight into energy efficiency and latency. This paper introduces algorithmic metrics that capture the potential and limitations of neural decoders for closed-loop intracortical brain-computer interfaces under energy and hardware constraints. The study benchmarks common decoding methods for predicting a primate's finger kinematics from motor cortex activity and explores their suitability for low-latency, low-compute neural decoding. It finds that ANN-based decoders provide superior decoding accuracy but require high latency and many operations to decode neural signals effectively. Spiking neural networks emerge as a solution that bridges this gap, achieving competitive decoding performance within sub-10 ms latency while utilizing a fraction of the computational resources.
These distinctive advantages of neuromorphic spiking neural networks position them as highly suitable for the challenging environment of closed-loop neuromodulation. Their capacity to balance decoding accuracy and operational efficiency offers immense potential for reshaping the landscape of neural decoders, fostering greater understanding, and opening new frontiers in closed-loop intracortical human-machine interaction.
Halaly et al
Model Predictive Control (MPC) is a prominent control paradigm providing accurate state prediction and subsequent control actions for intricate dynamical systems with applications ranging from autonomous driving to star tracking. However, there is an apparent discrepancy between the model's mathematical description and its behavior in real-world conditions, affecting its performance in real-time. In this work, we propose a novel neuromorphic spiking neural network for continuous adaptive non-linear MPC. By using real-time learning, our design significantly reduces dynamic error and augments model accuracy, while simultaneously addressing unforeseen situations. We evaluated our framework using real-world scenarios in autonomous driving, implemented in a physics-driven simulation. We tested our design with various vehicles (from a Tesla Model 3 to an Ambulance) experiencing malfunctioning and swift steering scenarios. We demonstrate significant improvements in dynamic error rate compared with traditional MPC implementation with up to 89.87% median prediction error reduction with 5 spiking neurons and up to 96.95% with 5000 neurons. Our results may pave the way for novel applications in real-time control and stimulate further studies in the adaptive control realm with spiking neural networks.
Beak et al
A filamentary-based organic memristor is a promising synaptic component for neuromorphic systems in wearable electronics. In organic memristors, metallic conductive filaments (CFs) are formed via electrochemical metallization under electrical stimuli, resulting in resistive switching characteristics. To realize bio-inspired computing systems using organic memristors, it is essential to engineer CF growth effectively so as to emulate complete synaptic functions in the device. Here, the fundamental principles underlying the operation of organic memristors and the parameters governing CF growth are discussed. Additionally, recent studies focused on controlling CF growth to replicate synaptic functions, including reproducible resistive switching, continuous conductance levels, and synaptic plasticity, are reviewed. Finally, upcoming research directions in the field of organic memristors for wearable smart computing systems are suggested.
Caravelli
It has been recently noted that for a class of dynamical systems with explicit conservation laws represented via projector operators, the dynamics can be understood in terms of lower dimensional equations. This is the case, for instance, of memristive circuits. Memristive systems are important classes of devices with wide-ranging applications in electronic circuits, artificial neural networks, and memory storage. We show that such mean-field theories can emerge from averages over the group of orthogonal matrices, interpreted as cycle-preserving transformations applied to the projector operator describing Kirchhoff's laws. Our results provide insights into the fundamental principles underlying the behavior of resistive and memristive circuits and highlight the importance of conservation laws for their mean-field theories. In addition, we argue that our results shed light on the nature of the critical avalanches observed in quasi-two dimensional nanowires as boundary phenomena.
Lu et al
Spike-Timing-Dependent Plasticity (STDP) is an unsupervised learning mechanism for Spiking Neural Networks (SNNs) that has received significant attention from the neuromorphic hardware community. However, scaling such local learning techniques to deeper networks and large-scale tasks has remained elusive. In this work, we investigate a Deep-STDP framework where a rate-based convolutional network, that can be deployed in a neuromorphic setting, is trained in tandem with pseudo-labels generated by the STDP clustering process on the network outputs. We achieve 24.56% higher accuracy and 3.5× faster convergence speed at iso-accuracy on a 10-class subset of the Tiny ImageNet dataset in contrast to a k-means clustering approach.
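The pseudo-labelling loop can be illustrated with the k-means baseline the authors compare against (a generic sketch on toy 2D features, not the paper's Deep-STDP pipeline): cluster the network's output features, then reuse the cluster assignments as training labels for the next supervised update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network output" features: three separated 2D blobs.
blob_centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 6.0]])
feats = np.concatenate(
    [c + rng.normal(0.0, 0.5, (50, 2)) for c in blob_centers]
)

def kmeans(x, k, iters=20, seed=0):
    """Plain Lloyd's algorithm; returns labels usable as pseudo-labels."""
    r = np.random.default_rng(seed)
    centers = x[r.choice(len(x), k, replace=False)]
    for _ in range(iters):
        dists = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for i in range(k):
            if np.any(labels == i):          # skip empty clusters
                centers[i] = x[labels == i].mean(axis=0)
    return labels, centers

# Cluster ids become the pseudo-labels for the next supervised update.
pseudo_labels, centers = kmeans(feats, 3)
```

In the Deep-STDP framework this clustering step is instead performed by an STDP process on the network outputs, which the paper shows yields both higher accuracy and faster convergence than the k-means variant sketched here.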