Powering an autonomous clock with quantum electromechanics

We theoretically analyse an autonomous clock comprising a nanoelectromechanical system, which undergoes self-oscillations driven by electron tunnelling. The periodic mechanical motion behaves as the clockwork, similar to the swinging of a pendulum, while induced oscillations in the electrical current can be used to read out the ticks. We simulate the dynamics of the system in the quasi-adiabatic limit of slow mechanical motion, allowing us to infer statistical properties of the clock’s ticks from the current auto-correlation function. The distribution of individual ticks exhibits a tradeoff between accuracy, resolution, and dissipation, as expected from previous literature. Going beyond the distribution of individual ticks, we investigate how clock accuracy varies over different integration times by computing the Allan variance. We observe non-monotonic features in the Allan variance as a function of time and applied voltage, which can be explained by the presence of temporal correlations between ticks. These correlations are shown to yield a precision advantage for timekeeping over the timescales that the correlations persist. Our results illustrate the non-trivial features of the tick series produced by nanoscale clocks, and pave the way for experimental investigation of clock thermodynamics using nanoelectromechanical systems.


I. INTRODUCTION
The measurement of time is perhaps the most primitive and also the most important of all physical measurements. Not only are clocks ubiquitous in our daily lives, but they also underpin satellite navigation, magnetometry, and gravimetry [1,2], among many other applications in metrology. However, despite the spectacular precision that is now routinely achieved with atomic clocks, fundamental gaps in our understanding of time at the quantum level still remain.
One fruitful and long-standing [3][4][5] strategy to address this problem is to formulate and study models of clocks as dynamical systems. Recent work in this direction has shown that basic physical considerations, stemming from information theory on the one hand [6][7][8][9][10] and thermodynamics on the other [11][12][13][14], strongly constrain both the internal structure and the achievable performance of timekeeping devices. For example, it is now understood that accurate clocks must produce a lot of entropy [11,13], quantum clocks can be more accurate than classical clocks with the same number of accessible states [9,10,15], and that there exists a tradeoff between a clock's accuracy and its timing resolution [16]. Similar constraints on temporal precision, known as thermodynamic or kinetic uncertainty relations, have been discovered in other, seemingly unrelated contexts, such as biomolecular systems [17][18][19][20] or mesoscopic transport setups [21][22][23]. Beyond their foundational importance, moreover, the physical limits of timekeeping have consequences for the fidelity of time-dependent control protocols arising in quantum thermodynamics [24][25][26] and quantum computation [27,28], among other applications.
Here we focus on ticking clocks, which produce a continuous time reference for an external observer, as opposed to stopwatch-like systems (e.g. the Larmor clock [29]) that measure only a single time interval. Any quantum ticking clock comprises two distinct parts: the clockwork and the register [7]. The clockwork is a non-equilibrium system whose dynamics generates the ticks, which are then recorded by the register. Separation of clockwork and register ensures that the tick readout does not disrupt the clockwork [30]. Although almost any out-of-equilibrium system can act as a clockwork, accurate timekeeping requires temporal probability concentration [14], whereby the irreversible process that generates each tick is permitted to occur only at well-spaced time intervals. For example, while a simple hourglass exploits free-falling mass (e.g. sand) to indicate the passage of time, the escapement of a pendulum clock allows a mass to fall incrementally, thereby generating the next tick, only after one full period of the swinging pendulum has elapsed [13,31]. However, much of this new understanding of quantum ticking clocks has been garnered via theoretical analysis of simple toy models. A connection between the abstract theory of autonomous quantum clocks and concrete experimental implementations is still lacking.
Here, we fill this gap by proposing and analysing an autonomous quantum clock based on nanoelectromechanical self-oscillations, which have recently been observed in the laboratory [32,33]. A self-oscillator [34] is a system that oscillates in the absence of periodic forcing, and therefore provides the ideal setting to realise an autonomous clockwork. In nanoelectromechanical self-oscillators, the periodic motion of a mechanical degree of freedom is driven by electron tunnelling [35]. This can occur either by electron shuttling [36][37][38][39][40][41], where single-electron tunnelling resonantly drives the oscillator, or via a subtler electromechanical instability that emerges from the time-delayed effect of many non-resonant tunnelling events [32,33,[42][43][44][45][46]. Thermodynamic aspects of both kinds of self-oscillations have recently been investigated [47][48][49], but their potential for studying the thermodynamics of timekeeping has not yet been studied in detail.
In this work, we consider an autonomous clock powered by electromechanical self-oscillations of the latter kind, motivated by the experiments reported in Refs. [32,33]. In our model, periodic mechanical motion provides the clockwork while the ticks are registered by monitoring the induced oscillations in the electronic current. As in previous studies, the autonomous nature of the clockwork dynamics allows for a self-contained analysis of the thermodynamic resources needed to measure time [11]. However, our work goes beyond existing literature on clock thermodynamics in at least two important respects.
First, our electromechanical clockwork is based on the oscillations of a continuous degree of freedom, reminiscent of a pendulum. Yet, unlike a pendulum clock, or indeed any other clock considered in the literature to our knowledge, these oscillations are driven by the measurement backaction associated with the tick readout itself. By contrast, existing theoretical models of quantum clocks consider the tick generation and readout to arise from distinct irreversible processes [9-11, 14-16, 30], while experimental work on the thermodynamics of timekeeping has either been confined to a purely classical regime where measurement backaction is negligible [50] or focussed purely on the role of measurement backaction in a non-autonomous setting [51]. Our setup is also distinct from Ref. [? ], which considered a quantum clock powered by measurements that are unrelated to tick readout.
Second, we go beyond the performance measures introduced in previous works [10,11] by evaluating the Allan variance [52], which quantifies clock accuracy over different integration times. At intermediate timescales, our results corroborate the general expectation that higher accuracy entails more dissipation. However, at longer timescales this trend reverses. To explain this crossover in the Allan variance, we show that the non-trivial dynamics of the clockwork generates persistent temporal correlations between the clock's ticks. Our results thus call for further research to understand both the origin and consequences of such temporal correlations, which have been largely neglected in previous studies of autonomous quantum clocks.
An outline of the manuscript is as follows. We first introduce our model for an electromechanical clock (Sec. II A), and explain how we model the clockwork's non-equilibrium dynamics (Sec. II B) and the readout of ticks from oscillations induced in the electrical current (Sec. II C). Our modelling is based on the quasi-adiabatic assumption, i.e. that the mechanical motion is slow compared to electronic relaxation, which is well satisfied in current experiments [32,33]. Then we examine the induced current oscillations in detail, identifying an interesting separation of timescales between amplitude and phase fluctuations (Sec. III A). Next, we analyse the distribution of individual ticks of the clock (Sec. III B) and find a tradeoff between accuracy, resolution, and entropy production, as expected from previous work [11]. Finally, we show that the Allan variance exhibits a non-trivial dependence on integration time and applied voltage (Sec. III C), which can be understood in terms of correlations between the clock's ticks that we quantify explicitly (Sec. III D). We conclude with a summary and a brief outlook on future work (Sec. IV). The reduced Planck constant, the Boltzmann constant, and the electron charge are all set to unity throughout this work.

A. Description of the setup
We consider an electromechanical system similar to the model outlined in Ref. [49]. A schematic of the setup is shown in Fig. 1. The clockwork consists of a resonant-level model coupled to a harmonic oscillator. The oscillator motion is in the direction perpendicular to the current flow (parallel to the plane defined by the leads), as shown in Fig. 1. The Hamiltonian of the clockwork is given by a sum of the following terms:

Ĥ = ĤS + ĤB + ĤT + ĤV,
ĤS = ϵ ĉ†ĉ,
ĤB = Σ_kα Ω_kα d̂†_kα d̂_kα,
ĤT = Σ_kα (g_kα ĉ† d̂_kα + h.c.),
ĤV = p̂²/2m + mω₀²x̂²/2 − F x̂ n̂.

Here, ĤS is the Hamiltonian of a single-level fermionic quantum dot, described by its energy ϵ and the annihilation operator ĉ. The leads coupled to the quantum dot are defined by the Hamiltonian ĤB. Each lead is described by an infinite sum of non-interacting Fermi modes with energy Ω_kα and annihilation operator d̂_kα, where α = L, R denotes the left or right lead. The tunnelling of electrons between the dot and the leads is described by ĤT, with g_kα the coupling strength. The vibrational Hamiltonian ĤV describes a single-mode mechanical oscillator constrained to move in one dimension, with momentum operator p̂, position operator x̂, natural oscillation frequency ω₀, and mass m. The final term in ĤV describes the electromechanical interaction between the quantum dot and the harmonic oscillator, where F is the force per unit charge and n̂ is the excess charge present on the quantum dot, i.e. n̂ = ĉ†ĉ − N₀, with N₀ = ⟨ĉ†ĉ⟩_{F=0} being the average charge on the dot in the absence of the electromechanical coupling. This kind of coupling can be induced, for example, by means of a nearby gate electrode held at a constant potential, which produces an electrostatic force on the oscillator depending on the number of charges present in the device [32] (e.g. in Fig. 1 this force would act vertically).
The leads are initially at equilibrium with equal inverse temperature β but different chemical potentials, µ_L ≠ µ_R, with V = µ_L − µ_R denoting the voltage bias. The tunnelling rate for each lead is determined by its spectral density, κ_α(ω) = 2π Σ_k |g_kα|² δ(ω − Ω_kα). Self-sustained oscillations require an energy-dependent spectral density [44]. As in our previous work [49], we assume the simple Lorentzian form

κ_α(ω) = Γ δ² / [(ω − ω_α)² + δ²],

which models a single band of states with bandwidth δ centred on energy ω_α, with Γ representing the overall tunnelling rate. We focus on the quasi-adiabatic limit, where the electronic degrees of freedom relax on a much faster timescale than the characteristic timescale of the oscillator. This is tantamount to the conditions

ω₀, λ ≪ Γ, δ,

where λ = F/√(mω₀) is the effective electromechanical coupling strength. These conditions are well satisfied in typical experiments [32,33] and also by our specific parameter choices below. Note that our minimal model neglects charging effects due to Coulomb interaction as well as intrinsic mechanical damping of the oscillator, since neither of these effects is crucial for the functioning of the clock.
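The assumed Lorentzian band is simple to evaluate numerically. The sketch below (in Python, with illustrative parameter values rather than those of the paper) checks two defining features of the form κ_α(ω) = Γδ²/[(ω − ω_α)² + δ²]: the coupling peaks at Γ on resonance and falls to half maximum at detuning ±δ.

```python
import numpy as np

def kappa(omega, Gamma, delta, omega_alpha):
    """Lorentzian spectral density: a single band of width delta
    centred on omega_alpha, with overall tunnelling rate Gamma."""
    return Gamma * delta**2 / ((omega - omega_alpha)**2 + delta**2)

# Illustrative values: the coupling peaks at the band centre and
# falls to half maximum at detuning +/- delta.
Gamma, delta, w_c = 1.0, 5.0, 10.0
peak = kappa(w_c, Gamma, delta, w_c)          # equals Gamma
half = kappa(w_c + delta, Gamma, delta, w_c)  # equals Gamma / 2
```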

B. Langevin equation for the clockwork dynamics
When the quasi-adiabatic conditions (7) hold, the electronic degrees of freedom can be adiabatically eliminated to obtain a description of the oscillator alone. Full details can be found in the Supplemental Material of Ref. [49], but we briefly sketch the derivation here because it serves as a precursor to the modelling of tick readout described in Sec. II C.
We divide the Hamiltonian into two parts as Ĥ = Ĥel + ĤV, where Ĥel = ĤS + ĤB + ĤT describes the electronic degrees of freedom and ĤV describes the oscillator dynamics including the electromechanical coupling. We now introduce the Wigner transform of the density matrix,

ρ̂(x, p) = (1/2π) ∫ dy e^{ipy} ⟨x − y/2| ρ̂ |x + y/2⟩,

where |x⟩ is a position eigenstate of the oscillator, so that ρ̂(x, p) acts as a quantum operator on the electronic Hilbert space. The Wigner function of the oscillator follows from W(x, p) = tr[ρ̂(x, p)], while the electronic state is given by ρ̂el = ∫ dx dp ρ̂(x, p). Using the operator correspondences for the Wigner function [53], one finds that the quantum Liouville equation dρ̂/dt = −i[Ĥ, ρ̂] is equivalent to an evolution equation of the form

∂ρ̂(x, p)/∂t = (L_x + H + F) ρ̂(x, p),

in terms of three superoperators whose explicit forms are given in Ref. [49]. Here, ⟨•⟩_x = tr[• ρ̂_x] denotes an average with respect to the electronic stationary state, ρ̂_x, which depends on x but not on p, and satisfies L_x ρ̂_x = 0 and tr ρ̂_x = 1. Physically, L_x represents the evolution of the electronic degrees of freedom conditioned on a particular oscillator position x, H represents the free mechanical evolution including the mean-field electromechanical force, and F represents the fluctuating force on the oscillator due to deviations of the electronic charge from its steady-state mean value.
By assumption (i.e. because of the conditions (7)), we have L_x ≫ H, F, so that the electronic degrees of freedom are only weakly perturbed from their instantaneous steady state ρ̂_x. This is formalised by defining, for any operator-valued function Â(x, p) of the oscillator's phase-space coordinates, the projector

P Â(x, p) = ρ̂_x tr[Â(x, p)],

whose orthogonal complement is denoted by Q = 1 − P. The "relevant" part of the density matrix, P ρ̂(x, p), determines the Wigner function, since W(x, p) = tr[P ρ̂(x, p)]. Thus, following the standard procedure [54], we can perturbatively eliminate the irrelevant part, Q ρ̂(x, p), taking into account contributions up to second order in F and H within a Born-Markov approximation (see Ref. [49] for details). This results in a Fokker-Planck equation for the Wigner function, which is equivalent to the classical Langevin equation [44][45][46]

m ẍ = −m ω₀² x + F⟨n̂⟩_x − m γ_x ẋ + √(D_x) ξ(t),

where ξ(t) is a white-noise force with zero mean, E[ξ(t)] = 0, and autocorrelation function E[ξ(t)ξ(t′)] = δ(t − t′), while γ_x and D_x are the damping and diffusion coefficients, respectively, both of which are determined by the noise spectrum S_x(ω) of the fluctuating force (see Ref. [49] for explicit expressions). In particular, since the oscillator's Wigner function is positive within the quasi-adiabatic approximation, it can be reconstructed by stochastically sampling the solutions of Eq. (14). The dynamics of the clockwork depends qualitatively on the voltage bias. For zero or very low voltage bias, the leads behave effectively like an equilibrium bath for the oscillator, such that the fluctuation-dissipation relation βD_x = 2mγ_x is approximately satisfied. The Langevin equation (14) then predicts a steady state for the oscillator that is almost thermal. Conversely, when the voltage exceeds a certain threshold value, electron tunnelling drives self-oscillations of the mechanical motion. In this regime, the damping rate γ_x entering the Langevin equation (14) becomes negative for small x. As a result, the long-time state of the oscillator predicted by Eq. (14) is a limit cycle that oscillates with period ∼ 2π/ω₀ [32,49].
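The qualitative behaviour of such a Langevin equation can be illustrated with a minimal numerical sketch. The code below integrates an underdamped oscillator whose damping coefficient is negative near the origin, a hypothetical van der Pol-like stand-in for the microscopically derived γ_x and D_x (which are not reproduced here); above threshold the trajectory settles onto a noisy limit cycle, as described in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the microscopic coefficients:
# gamma(x) < 0 near x = 0 mimics the self-oscillation instability.
m, w0 = 1.0, 1.0
def gamma(x): return 0.05 * (x**2 - 1.0)   # illustrative, not the model's gamma_x
D = 0.01                                   # illustrative constant diffusion

dt, steps = 1e-3, 200_000
x, p = 0.1, 0.0
xs = np.empty(steps)
for i in range(steps):                     # Euler-Maruyama integration
    x += (p / m) * dt
    p += (-m * w0**2 * x - gamma(x) * p) * dt \
         + np.sqrt(2 * D * dt) * rng.standard_normal()
    xs[i] = x

# RMS amplitude after the transient: a van der Pol-like cycle of radius ~2
A_est = np.sqrt(2 * np.mean(xs[steps // 2:] ** 2))
```

With these toy coefficients the small initial displacement grows until nonlinear damping balances the anti-damping near the origin, mirroring the threshold behaviour discussed above.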

C. Electrical current and current noise
Under a voltage bias, electrical current flows continuously through the system. Mechanical oscillations induce variations of this current that can be monitored to generate a series of ticks, as depicted in Fig. 1 and explained in more detail in Sec. III A. A key assumption of our work is that the current signal is effectively classical, i.e. it can be read out without additional measurement backaction beyond that which is already contained in the damping and diffusion terms of Eq. (14). This is justified in the quasi-adiabatic limit, because a very large number of electrons tunnel through the device within a given oscillation period. In practice, oscillations in the current can be continuously monitored by using a resonant tank circuit to filter frequency components near the relevant frequency, together with quantum-limited amplification [32,55].
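The effect of such narrow-band filtering can be mimicked digitally. The sketch below (all parameter values invented for illustration) buries a weak periodic "current" component in broadband noise and recovers it by keeping only Fourier components near the oscillation frequency, which is essentially what a resonant tank circuit does in hardware.

```python
import numpy as np

rng = np.random.default_rng(6)
dt, n = 1e-3, 2**15
t = np.arange(n) * dt
f_osc = 50.0                                   # oscillation frequency (arbitrary)
signal = 0.2 * np.sin(2 * np.pi * f_osc * t)   # weak periodic component
noisy = signal + rng.standard_normal(n)        # buried in broadband noise

# Idealised "tank circuit": keep only components within 2 Hz of f_osc.
F = np.fft.rfft(noisy)
f = np.fft.rfftfreq(n, dt)
F[np.abs(f - f_osc) > 2.0] = 0.0
filtered = np.fft.irfft(F, n)

# The filtered trace tracks the hidden oscillation closely.
corr = np.corrcoef(filtered, signal)[0, 1]
```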
To find how the oscillator motion modifies the current, we use the fact that ρ̂(x, p) ≈ P ρ̂(x, p) to lowest order in small quantities [49]. The electronic state at long times is therefore given to lowest order by

ρ̂el ≈ ∫ dx dp W(x, p) ρ̂_x = ∫ dx P(x) ρ̂_x,

where W(x, p) is the steady-state Wigner function and P(x) = ∫ dp W(x, p) is the corresponding marginal probability distribution for the oscillator's position. Equation (17) can be used to compute observables of the electronic system. In particular, the average of the electronic current operator, Î, is given to lowest order by

⟨Î⟩ = ∫ dx P(x) ⟨Î⟩_x.

Here we have defined ⟨Î⟩_x ≡ tr[Î ρ̂_x] as the conditional expectation value of the current operator, given the oscillator's position x.
While the ensemble-averaged current tends to the stationary value (18) at long times, individual trajectories of the measured current will show periodic oscillations. This is because fluctuations of the oscillator position are transduced into current fluctuations, as characterised by the symmetrised autocorrelation function [56]

C(t) = ½⟨{Î(t), Î(0)}⟩ − ⟨Î⟩²,

where all averages are taken with respect to the steady-state density matrix, ρ̂. Let us define an auxiliary density operator K̂(t) = e^{−iĤt} ρ̂ Î e^{iĤt}, such that the correlation function can be found from the real part of tr[Î K̂(t)], where K(x, p, t) is the Wigner transform of K̂(t), defined analogously to Eq. (8). Since K̂(t) obeys the Liouville-von Neumann equation dK̂/dt = −i[Ĥ, K̂], we have that K(x, p, t) obeys the same equation of motion as ρ̂(x, p): namely,

∂K(x, p, t)/∂t = (L_x + H + F) K(x, p, t),

where the superoperators on the right-hand side are defined in Eqs. (10)-(12). We can therefore approximate the solution to this equation following an adiabatic elimination procedure analogous to Sec. II B, but where now both P and Q projections are needed to reconstruct the correlation function.
As shown in Appendix A, this procedure yields the current autocorrelation function C(t) = C_Q(t) + C_P(t), with

C_Q(t) = E[∆_{x(0)}] δ(t),
C_P(t) = E[⟨Î⟩_{x(t)} ⟨Î⟩_{x(0)}] − ⟨Î⟩².

In these expressions, all averages E[•] are taken over oscillator trajectories x(t) sampled from the Langevin equation (14), with initial condition x(0) distributed according to the steady-state position distribution, P(x). Equation (24) derives from the Q component of the correlation function, which is effectively delta-correlated in comparison to the slow timescales of the oscillator dynamics. This term represents the quantum shot noise of the current, ∆_x, averaged over the steady-state position distribution. Equation (25) is the P component of C(t): it describes slow fluctuations of the current induced by stochastic variations of the oscillator position, x(t).
For our model, both the conditional average current ⟨Î⟩_x and the current noise ∆_x can be found using Landauer-Büttiker theory [57], as

⟨Î⟩_x = (1/2π) ∫ dE τ_x(E) [f_L(E) − f_R(E)],
∆_x = (1/2π) ∫ dE {τ_x(E) [f_L(1 − f_L) + f_R(1 − f_R)] + τ_x(E)[1 − τ_x(E)] (f_L − f_R)²}.

Here,

f_α(E) = 1/(e^{β(E−µ_α)} + 1)

is the Fermi-Dirac distribution for lead α, while τ_x(E) is the transmission function of the resonant-level model with the position-dependent energy ϵ − Fx [57]:

τ_x(E) = κ_L(E) κ_R(E) / {[E − ϵ + Fx − Λ(E)]² + [κ_L(E) + κ_R(E)]²/4},

where κ_α(E) is the spectral density [Eq. (5)] of lead α = L, R and the corresponding self-energy shift is Λ(E) = Σ_α Λ_α(E), with

Λ_α(E) = P ∫ (dω/2π) κ_α(ω)/(E − ω)

a principal-value integral.
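As a concrete illustration, the conditional current can be evaluated by numerical integration. The sketch below uses a wide-band (energy-independent) coupling approximation with arbitrary parameter values, so the Lorentzian lead structure and the self-energy shift Λ(E) of the full model are deliberately omitted.

```python
import numpy as np

def fermi(E, mu, beta):
    # numerically stable form of 1/(exp(beta*(E - mu)) + 1)
    return 0.5 * (1.0 - np.tanh(0.5 * beta * (E - mu)))

def current(x, eps=0.0, F=0.5, GL=1.0, GR=1.0, muL=2.0, muR=-2.0, beta=5.0):
    """Landauer current through a level at eps - F*x (wide-band limit;
    illustrative parameters, not those of the paper)."""
    E = np.linspace(-30.0, 30.0, 20001)
    tau = GL * GR / ((E - (eps - F * x))**2 + 0.25 * (GL + GR)**2)
    integrand = tau * (fermi(E, muL, beta) - fermi(E, muR, beta))
    return integrand.sum() * (E[1] - E[0]) / (2 * np.pi)

I0 = current(0.0)   # level sits mid-bias-window, so the current is maximal
```

Note that for ϵ = 0 and a symmetric bias window the current is an even function of x, current(x) = current(−x); this symmetry underlies the frequency doubling of the current signal discussed in Sec. III A.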
The above equations are valid assuming coherent electron transport through the device, so that we can neglect Coulomb interactions and inelastic scattering. However, our conclusions are not strongly dependent on the choice of transport model. We have checked explicitly that qualitatively similar results are obtained using a sequential-tunnelling (i.e. rate-equation [58]) description of transport.

Current trajectories and timescale separation
As described in Sec. II C, the mechanical motion induces changes in the measured current. Therefore, mechanical self-oscillations will lead to a periodic current signal, which can be used to measure the passage of time. The accuracy of such timekeeping is then limited by stochastic fluctuations in the current signal, as characterised by the autocorrelation function C(t). The most relevant fluctuations for timekeeping are encoded in the slow part of the correlation function C_P(t) [Eq. (25)], because this varies on the same timescale as the clockwork dynamics. The fast fluctuations described by C_Q(t) merely add a constant to the current noise spectrum, as discussed at the end of this section.
To characterise the properties of the measured current signal, we solve the Langevin equation (14) using a stochastic Euler method and convert trajectories x(t) into a corresponding current signal ⟨Î⟩_{x(t)} via Eq. (26). Fig. 2 shows a sample current trajectory within the parameter regime where self-oscillations occur. We use the same parameters as in Ref. [49], where the transition to self-sustained oscillations was found to occur at V ≈ 30ω₀. We observe near-periodic oscillations of the current on shorter timescales (Fig. 2 inset), with a peak-to-peak amplitude that varies significantly over longer timescales (Fig. 2 main panel). We also see that the oscillation frequency is approximately twice the natural frequency of the mechanical oscillator, and that the maximum value of the current oscillations is fixed and does not fluctuate. Both of these observations can be understood from the fact that the maximum current is obtained whenever the oscillator's position crosses x = 0. A natural way to measure time is therefore to register a tick of the clock whenever the current reaches its maximum value, as illustrated in the inset of Fig. 2.
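The tick-extraction step can be sketched with a synthetic signal. Below, a stand-in current that peaks whenever a model oscillator crosses x = 0 (hence at twice the mechanical frequency) is scanned for local maxima, each of which is registered as a tick; the waiting times then follow as differences of successive tick times. All parameter values are arbitrary.

```python
import numpy as np

# Synthetic stand-in: the current is maximal whenever x = 0, so it
# oscillates at twice the mechanical frequency.
t = np.linspace(0.0, 50.0, 50001)
x = np.cos(2 * np.pi * t)        # mechanical period = 1 (arbitrary units)
I = 1.0 - x**2                   # toy current, peaks twice per period

# Register a tick at every local maximum of the current signal.
is_peak = (I[1:-1] > I[:-2]) & (I[1:-1] >= I[2:])
ticks = t[1:-1][is_peak]
waits = np.diff(ticks)           # waiting times between consecutive ticks
```

For this noiseless example the ticks are perfectly regular, spaced by half the mechanical period; for the stochastic trajectories of the full model the same procedure yields a fluctuating series of waiting times.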
Note that while the precise position x at which the current is maximised depends on the specific choice of parameters, other parameter choices would yield qualitatively similar results. Figure 3 shows the correlation function C(t) for three different applied voltages, spanning the transition from thermal noise at low bias to self-sustained oscillations at high bias. At low voltage we observe decaying oscillations, as would be expected for the thermal fluctuations of an underdamped harmonic oscillator [59]. Meanwhile, at voltages above the threshold for self-sustained oscillations, two distinct timescales can be discerned. The first, fast timescale governs the decay of oscillations, which results from phase diffusion on the limit cycle. The second, slow timescale dictates the asymptotic decay of the correlation function: for example, in the lower panel of Fig. 3, oscillations decay over a few hundred oscillation periods, while the correlation function itself does not decay completely until approximately 10⁴ periods have elapsed. This latter decay is monotonic rather than oscillatory and can, therefore, be understood as a consequence of long-lived amplitude correlations, as we now explain.

Modelling amplitude and phase fluctuations
To elucidate the different roles of amplitude and phase fluctuations, in Appendix B we derive an effective equation of motion describing small fluctuations of the oscillator trajectory away from the limit cycle. We express the results in polar coordinates, i.e. x = A cos(ϕ) and ẋ/ω₀ = −A sin(ϕ). An ideal limit cycle would have a constant amplitude A = A₀ and a linearly increasing phase ϕ = ω₀t. Taking into account fluctuations away from the limit cycle up to first order, we obtain an Ornstein-Uhlenbeck process describing amplitude variations, and a biased diffusion process describing the phase evolution, viz.

dA = −γ_A (A − A₀) dt + √(D_A) dw_A,
dϕ = ω₀ dt + √(D_ϕ) dw_ϕ.
Here, γ_A is the damping rate governing the regression of amplitude fluctuations back to the limit-cycle amplitude A₀, dw_A and dw_ϕ are independent Wiener increments describing amplitude and phase fluctuations, respectively, and D_A and D_ϕ are the corresponding diffusion coefficients. Our simplified model assumes that the limit-cycle amplitude is large and that the rates of phase and amplitude diffusion are small compared to the oscillation frequency, which justifies approximating the amplitude and phase noise as independent even though they derive from the same Wiener process. The explicit expressions for the above parameters are cumbersome (see Appendix B for details), but a straightforward numerical evaluation shows that γ_A ≫ D_ϕ in the high-bias regime where Eqs. (30) and (31) are valid. This indicates that small amplitude variations evolve significantly faster than phase decoherence, and therefore this model alone cannot explain the timescale separation evident in Fig. 3.

We thus turn to an alternative toy model that captures the large-scale amplitude variations seen in Fig. 2. Specifically, the trajectory shown in Fig. 2 exhibits rare periods where the oscillation amplitude reduces dramatically, so that the current remains close to its maximum value for a short time before returning to its previous oscillatory state. We model this behaviour as a bistable process, where the stochastic current switches between two states labelled by i = 1, 2, with corresponding currents I₁(t) and I₂(t). This kind of bistable behaviour was predicted for electromechanical self-oscillations in Ref. [46], and electromechanical bistabilities have recently been observed experimentally [60]. We assume that these two states have different steady-state averages Ī_i = E[I_i(t)], and that they are statistically independent from each other and from the switching process itself. Under these assumptions, we show in Appendix C that the steady-state autocorrelation function takes the form

C(t) = Σ_i p_i C_i(t) + C_{1↔2}(t),

where p_i is the steady-state probability to be in state i, C_i(t) is the autocorrelation function of current fluctuations within state i, p_{j|i}(t) is the probability to transition i → j over a time t, and

C_{1↔2}(t) = Σ_{i,j} p_i [p_{j|i}(t) − p_j] Ī_i Ī_j.

The first term in Eq. (32) clearly quantifies correlations within each state. The second term, C_{1↔2}(t), is the contribution from the bistability itself: its time dependence arises only from the statistics of the switching process, while it depends on the current only through the conditional averages, Ī_i. Therefore, the autocorrelation function will exhibit a separation of timescales whenever the switching process is much slower than the phase and amplitude fluctuations of the current within each state. In Appendix C we demonstrate this explicitly, showing that this simple bistable model recovers the features of the autocorrelation function above threshold in Fig. 3. Crucially, using the conservation of probability, one finds that C_{1↔2}(t) ∝ (Ī₁ − Ī₂)², which vanishes identically if the two states carry the same average current. Therefore, it is important that the cycle-averaged current depends on the oscillation amplitude, which is indeed the case in our setup (cf. Fig. 2).
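The role of the slow switching contribution C_{1↔2}(t) can be checked with a minimal simulation. The sketch below generates a symmetric two-state telegraph current (hypothetical switching rate k and state currents I₁, I₂, with no intra-state dynamics, so the within-state terms vanish) and compares its empirical autocorrelation with the exponential decay p₁p₂(I₁ − I₂)² e^(−2kt) expected for this special case.

```python
import numpy as np

rng = np.random.default_rng(2)
k = 0.5                      # switching rate each way (hypothetical)
I1, I2 = 1.0, 0.0
dt, steps = 0.01, 2_000_000

# Symmetric random telegraph process: flip with probability k*dt per step.
flips = rng.random(steps) < k * dt
state = np.cumsum(flips) % 2
I = np.where(state == 0, I1, I2)

dI = I - I.mean()
def corr(lag):
    return np.mean(dI[:-lag] * dI[lag:])

# Expected: C(t) = p1*p2*(I1 - I2)**2 * exp(-2*k*t) with p1 = p2 = 1/2
c_short = corr(1)                # lag 0.01: close to 0.25
c_long = corr(int(1.0 / dt))     # lag 1.00: close to 0.25 * exp(-1)
```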

Current noise power spectrum
Finally, it is instructive to analyse the current noise in the Fourier domain via the power spectrum

S(ω) = ∫ dt e^{iωt} C(t).

In addition to a flat white-noise contribution from the quantum shot-noise term C_Q(t), we observe two distinct peaks in the power spectrum, at ω = 0 and ω ≈ 2ω₀, reflecting the separation of timescales inherent to C_P(t). The height of both peaks grows with voltage bias, but for different reasons. The zero-frequency peak is associated with amplitude noise and its height scales with the lifetime of amplitude correlations (see Appendix C), which becomes very large well above threshold. This growth of the zero-frequency noise is well known in other non-equilibrium systems with long-lived metastable states [46,61]. Meanwhile, the peak in S(ω) at ω ≈ 2ω₀ is related to the electromechanical oscillations, and is plotted in Fig. 4. The height of this peak scales with the oscillation amplitude, which is of course larger in the self-oscillating regime. As the voltage increases, the position of the finite-frequency peak shifts slightly downwards in frequency, while its width becomes narrower. This implies that the ticks of the clock become more regular in time, while their frequency slightly decreases. As discussed in Sec. III B, these results indicate an increase in the clock's accuracy as the rate of dissipation increases, accompanied by a small decrease in the clock's resolution. This is consistent with previous studies of autonomous clocks [11,14,16,50].
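A toy numerical analogue of this spectrum: below, a synthetic current with a slowly wandering (Ornstein-Uhlenbeck) amplitude multiplying an oscillation at twice the mechanical frequency, plus white noise, is Fourier analysed with a simple periodogram. All parameters are invented for illustration; the finite-frequency peak appears at 2ω₀ on top of a flat noise floor.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, n = 0.01, 2**16
t = np.arange(n) * dt
f0 = 1.0                          # mechanical frequency in Hz (arbitrary)

# Slow Ornstein-Uhlenbeck amplitude modulating an oscillation at 2*f0,
# plus a white "shot noise" background (all values illustrative).
A = np.empty(n)
A[0] = 1.0
for i in range(1, n):
    A[i] = A[i-1] - 0.1 * (A[i-1] - 1.0) * dt \
           + np.sqrt(2 * 0.01 * dt) * rng.standard_normal()
I = A * np.cos(2 * np.pi * (2 * f0) * t) + 0.3 * rng.standard_normal(n)

# One-sided periodogram of the mean-subtracted signal.
S = np.abs(np.fft.rfft(I - I.mean()))**2 * dt / n
f = np.fft.rfftfreq(n, dt)
peak_f = f[np.argmax(S[1:]) + 1]  # skip the DC bin
```

The slow amplitude process narrows or broadens the finite-frequency peak, loosely mimicking the voltage dependence described above.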

B. Accuracy and resolution from the tick distribution
We now turn to a more detailed analysis of the individual ticks of the clock. The tick distribution or waiting-time distribution (WTD) is defined as the probability density, W(τ), of a time delay τ between two consecutive ticks. In simple models of autonomous clocks, each tick is assumed to be generated by an identical process, leading to the waiting times being independent and identically distributed (i.i.d.) random variables. Then, the WTD fully characterises the statistical properties of the time signal.
In particular, the mean and variance of the WTD are

µ = ∫ dτ τ W(τ),   σ² = ∫ dτ (τ − µ)² W(τ).

The clock's resolution ν and accuracy N are then defined as [11]

ν = 1/µ,   N = µ²/σ².

The resolution is simply the rate at which the ticks are generated, while the accuracy is the expected number of ticks before the clock's reading is off by one tick. To see this, suppose that n ticks spaced by waiting times τ₁, τ₂, . . . , τ_n have been generated. The clock's time reading is t_n = Σ_{i=1}^n τ_i, with average E[t_n] = nµ and root-mean-square error ∆t_n = √n σ due to the assumption of i.i.d. ticks. The accuracy is then the effective number of ticks N such that ∆t_N = µ, which leads directly to Eq. (38).
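These definitions are easy to verify numerically. The toy example below draws i.i.d. Gaussian waiting times (arbitrary µ and σ), estimates ν = 1/µ and N = µ²/σ² from the samples, and checks that the RMS error of the clock reading grows as √n σ, reaching one full tick period after N ticks.

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma = 1.0, 0.05              # tick period and jitter (arbitrary units)
n_ticks, n_runs = 400, 2000

# i.i.d. waiting times; the clock reading after n ticks is t_n = sum of waits
waits = rng.normal(mu, sigma, size=(n_runs, n_ticks))
t_n = waits.cumsum(axis=1)

nu = 1.0 / waits.mean()            # resolution: rate of ticks
N = waits.mean()**2 / waits.var()  # accuracy: ticks until off by one period

rms_err = t_n.std(axis=0)          # RMS reading error, grows as sqrt(n)*sigma
```

With these values N = µ²/σ² = 400, so the RMS error after the last (400th) tick is close to one full period µ, illustrating the definition of accuracy.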
We construct the WTD for our model numerically as a normalised histogram of the waiting times between all pairs of consecutive ticks in the time series. The results are plotted in Fig. 5 for three different voltages. Above threshold, we find that the WTD is well approximated by an inverse Gaussian (Wald) distribution,

W(τ) = √(µ³/2πσ²τ³) exp[−µ(τ − µ)²/2σ²τ].

As shown in Ref. [62], Eq. (39) can be derived by considering the distribution of the first-passage time τ for the oscillation phase on the limit cycle to complete one full 2π rotation. This result holds when the phase is described by a continuous-time biased random walk, as in Eq. (31). The solid curves in Fig. 5 show the best fits of the numerical WTDs to Eq. (39), with µ and σ as fitting parameters. Notably, as the bias across the system is increased, the mean of the distribution increases and its variance decreases. This indicates that the clock's accuracy increases as the bias across the system is increased, while its resolution decreases slightly. Increasing the bias also increases the average rate of entropy production, which is proportional to the power dissipated in the circuit:

Σ = βV⟨Î⟩.

Figure 6 shows the accuracy and resolution of the electromechanical clock against the entropy produced per tick, ∆S_tick = Σ/ν. We observe an approximately linear relationship between accuracy and entropy production during the onset of self-sustained oscillations, showing that accurate timekeeping comes at the cost of increased dissipation. However, as the voltage is increased further, the growth of accuracy slows and begins to saturate at a value of around 4 × 10⁴: at this point, additional dissipation gives little further advantage to timekeeping.
Meanwhile, the clock's resolution decreases and eventually saturates with increasing dissipation. However, the change in resolution is minimal for this model, which is unsurprising given that the tick period is fundamentally fixed by the physical oscillator frequency. The small decrease in resolution is probably due to effective non-linearities associated with the position-dependent mean-field force ∝ F⟨n̂⟩_x and damping rate γ_x, whose effect changes as the oscillator explores a larger region of phase space.
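For reference, the inverse-Gaussian form of Eq. (39) can be written directly in terms of the mean µ and variance σ² used in the fits, via the shape parameter λ = µ³/σ² of the standard Wald density (a parametrisation we assume here), and sanity checked numerically:

```python
import numpy as np

def wald_pdf(tau, mu, var):
    """Inverse-Gaussian (Wald) density with mean mu and variance var,
    i.e. shape parameter lam = mu**3 / var."""
    lam = mu**3 / var
    return np.sqrt(lam / (2 * np.pi * tau**3)) \
        * np.exp(-lam * (tau - mu)**2 / (2 * mu**2 * tau))

mu, var = 1.0, 0.01
tau = np.linspace(1e-4, 3.0, 300_000)
W = wald_pdf(tau, mu, var)
dtau = tau[1] - tau[0]

norm = W.sum() * dtau                        # normalisation, close to 1
mean = (tau * W).sum() * dtau                # close to mu
varW = ((tau - mean)**2 * W).sum() * dtau    # close to var
```

For this parametrisation the accuracy is N = µ²/σ² = λ/µ, consistent with the first-passage interpretation of Eq. (39).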

C. Allan variance
The measures of accuracy and resolution considered in Sec. III B are only strictly applicable when the waiting times are i.i.d. random variables. The latter condition is equivalent to the statement that the total number of ticks is a renewal process [63]. However, this assumption does not hold exactly, because the limit cycle is mesoscopic in scale and subject to strong amplitude fluctuations. While the amplitude and phase noise can be approximated as independent for very small deviations from the limit cycle [see Sec. III A 2], this conclusion does not hold in general. Larger amplitude fluctuations will become correlated with the phase noise because both derive from the same underlying stochastic force induced by electron tunnelling. Therefore, differences in the state of the system after each tick may cause the next tick to be correlated with the previous ones.
Given that the ticks of our mesoscopic electromechanical clock are correlated, we need a more sophisticated measure of performance than the accuracy and resolution defined in Eqs. (37) and (38). We focus instead on the Allan variance [52], which measures the frequency stability of a stochastic process and is widely used to test the stability of sensors and clocks. The Allan variance quantifies the deviation of a measured clock signal θ from a second reference clock. For simplicity, we take this reference to be an ideal clock whose reading is the coordinate time, t, so that the measured deviation is Then, y(t) = dX/dt is the instantaneous frequency difference between the measured clock and the reference clock at time t. This frequency deviation is estimated by observing the clock's reading over a finite integration time T, leading to the average fractional frequency difference The Allan variance is then proportional to the mean-square deviation of successive average fractional frequency differences, thus quantifying how instability in the average frequency scales with integration time. For deterministic signals, where the frequency remains constant, the expected frequency deviation is zero. Consequently, a clock that closely adheres to deterministic behaviour, and is therefore more accurate, exhibits a smaller Allan variance. We compute the Allan variance for our electromechanical clock by partitioning the stochastic current trajectories into non-overlapping regions of equal duration T. The measured deviation pertaining to the n-th region, X_n = X(nT), is evaluated by counting the number of registered ticks up to time t = nT. Finally, the Allan variance is computed as The behaviour of the Allan variance as a function of T may be sensitive to the precise choice of T values; we have repeated our calculations for different values of T to ensure that the results are robust.
The results for the Allan variance are plotted in Fig. 7 for a range of different voltages. In general, the Allan variance decreases systematically with T: evidently, integrating the clock signal over a longer time yields greater precision. This highlights the trade-off between the clock's timekeeping accuracy and resolution [16]. Note that here σ_y^2 decreases indefinitely with T, whereas real experiments usually find that the Allan variance begins to grow beyond a certain integration time. This is typically due to the influence of low-frequency fluctuations, e.g. due to temperature variations or the gradual degradation of the apparatus over time, which are absent in our theoretical model.
At short and intermediate timescales, we observe from Fig. 7 that higher voltage leads to smaller values of σ_y^2, demonstrating a quantitative link between accuracy and the rate of energy dissipation. It follows that the relationship between timekeeping precision and entropy production extends over multiple ticks of the clock, beyond the simple accuracy measure of Eq. (38) that only accounts for the single-tick WTD. At long times, all curves in Fig. 7 tend to the approximate scaling σ_y^2 ∼ T^{−1}. This is consistent with known results for the long-time Allan variance of a renewal process (see Appendix D or Ref. [30]): which is fully determined by the clock's accuracy N defined in Eq. (38). A similar 1/T scaling is observed in Fig. 7 across essentially all times for voltages below the threshold for self-oscillations. Interestingly, however, the Allan variance initially decreases slightly faster than T^{−1} for voltages above threshold. This trend continues up to an intermediate timescale on the order of a few thousand ticks, after which the slope of σ_y^2(T) changes dramatically and the advantage in timekeeping precision due to self-oscillations is eventually lost altogether. The non-trivial behaviour of the Allan variance as a function of T indicates that temporal correlations play a significant role in determining the clock's performance over multiple ticks.

D. Quantifying correlations between ticks
In this section we turn to explicitly quantifying the correlations between the ticks of the clock, which are responsible for the faster decrease of σ_y^2(T) above threshold. We consider two different measures of statistical correlations. First, we consider the clock reading after n ticks, t_n = Σ_{i=1}^n τ_i, which is the sum of n consecutive waiting times τ_1, τ_2, ..., τ_n. Let P_n(t_n) be the corresponding probability distribution. If the ticks were generated by a renewal process, so that the individual waiting times were i.i.d., this distribution would be given by the n-fold convolution of single-tick WTDs, i.e.
We use the relative entropy (Kullback-Leibler divergence) to measure the distance between P_n(t_n) and the i.i.d. result in Eq. (46): Since we are considering a stationary process, if the waiting times are not i.i.d. then they are not independent. Equation (47) can thus be understood as a measure of the total statistical correlations among all n ticks. The inset of Fig. 8 demonstrates that the ticks are indeed correlated: the relative entropy grows as a function of n. Moreover, this growth is faster for higher voltages, indicating that correlations between ticks become more significant in the self-oscillation regime. Since these correlations should have a finite range in time, we expect the total correlations to eventually saturate as n increases. Unfortunately, we are unable to reliably extend our calculation of D_KL(P_n || ⊛^n W) to higher values of n: the results become very noisy due to the limited quantity of available numerical data and the relative complexity of the n-fold convolution.
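A sketch of how such a comparison can be set up numerically is given below: both distributions are estimated on a common grid, with the i.i.d. reference built by repeated discrete convolution of the empirical WTD. The function name and binning choices are our own, and the histogram-based estimator carries a finite-sample bias, which is precisely the noise issue mentioned above.

```python
import numpy as np

def kl_ticks_vs_iid(waiting_times, n, bins=60):
    """D_KL between the empirical distribution of the n-tick reading t_n and
    the n-fold convolution of the single-tick WTD, on a common histogram grid."""
    taus = np.asarray(waiting_times)
    grid = np.linspace(0.0, taus.max() * n, bins * n)
    dt = grid[1] - grid[0]
    w = np.histogram(taus, bins=grid, density=True)[0]
    # n-fold convolution of the empirical WTD (discrete approximation of Eq. (46))
    conv = w.copy()
    for _ in range(n - 1):
        conv = np.convolve(conv, w)[: len(w)] * dt
    # Empirical distribution of t_n from non-overlapping sums of n waiting times
    m = len(taus) // n
    t_n = taus[: m * n].reshape(m, n).sum(axis=1)
    p_n = np.histogram(t_n, bins=grid, density=True)[0]
    mask = (p_n > 0) & (conv > 0)
    return np.sum(p_n[mask] * np.log(p_n[mask] / conv[mask])) * dt

# Sanity check: i.i.d. waiting times should give a divergence close to zero
rng = np.random.default_rng(2)
taus_iid = rng.gamma(9.0, 1.0, size=300_000)
kl = kl_ticks_vs_iid(taus_iid, n=3)
```

For genuinely correlated tick series the same estimator returns a divergence that grows with n, as in the inset of Fig. 8.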
To estimate the timescale over which correlations decay, we therefore focus on a simpler measure: the mutual information between pairs of ticks that are m ticks apart. This is defined as where W(τ_i, τ_{i+m}) is the joint distribution of the waiting times τ_i and τ_{i+m}, W(τ_i) is the WTD (identical for τ_i and τ_{i+m}), and H[W(τ)] = −∫ dτ W(τ) ln W(τ) denotes the Shannon entropy. Unlike the relative entropy, which measures the total correlations among all ticks, Eq. (48) quantifies only correlations between specific pairs of ticks and can be computed reliably over long timescales using our numerical data. The mutual information is plotted in the main panel of Fig. 8. This plot demonstrates that the correlations are indeed short-ranged in time: I(τ_i : τ_{i+m}) is largest for small m and at lower voltages. However, especially at voltages above threshold, these correlations persist over long timescales on the order of several thousand ticks. This is the same timescale at which the non-monotonic behaviour of the Allan variance in Fig. 7 appears.
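The pairwise estimator can be sketched from a two-dimensional histogram of lagged tick pairs. This is our own minimal implementation, not the paper's code; the plug-in estimate carries a small positive bias that shrinks with the number of samples.

```python
import numpy as np

def pair_mutual_information(waiting_times, m, bins=40):
    """Plug-in estimate of I(tau_i : tau_{i+m}) from a 2D histogram of tick pairs."""
    x = waiting_times[:-m]
    y = waiting_times[m:]
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    joint /= joint.sum()                      # joint probability W(tau_i, tau_{i+m})
    px = joint.sum(axis=1)                    # marginal of tau_i
    py = joint.sum(axis=0)                    # marginal of tau_{i+m}
    mask = joint > 0
    return np.sum(joint[mask] * np.log(joint[mask] / np.outer(px, py)[mask]))

# Independent samples give near-zero MI; an AR(1) sequence gives a clearly positive value
rng = np.random.default_rng(3)
indep = pair_mutual_information(rng.standard_normal(200_000), m=1)
ar = np.empty(200_000)
ar[0] = 0.0
for i in range(1, len(ar)):
    ar[i] = 0.8 * ar[i - 1] + rng.standard_normal()
corr = pair_mutual_information(ar, m=1)
```

Scanning m in this way traces out the decay of pairwise correlations, analogous to the main panel of Fig. 8.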
We thus conclude that σ_y^2 decreases more rapidly above threshold because of temporal correlations between the ticks. These correlations provide an advantage in timekeeping precision, so long as the integration time T is not much greater than the correlation time. By contrast, when T far exceeds the range of temporal correlations, the integrated signals in two neighbouring time periods become effectively independent and the behaviour of the Allan variance reduces to that of a renewal process, i.e. σ_y^2 ∼ T^{−1}. These correlations are likely dictated by several factors acting in tandem. The timescale over which they persist appears to be far longer than the timescale of pure phase decoherence apparent in Fig. 3, indicating that amplitude fluctuations also play a role. This can be understood from the fact that amplitude and phase fluctuations are generally not independent, meaning that a variation in amplitude after one cycle can influence the time taken for the next cycle to be completed. The timescale over which tick correlations persist will then be determined by a combination of phase decoherence, amplitude fluctuations near the limit cycle, and large-scale amplitude variations due to bistability in the current signal.

IV. CONCLUSIONS AND OUTLOOK
In this work, we have introduced a self-contained model of an autonomous clock, where the clockwork is an electromechanical self-oscillator and the ticks of the clock are read out from the electrical current passing through the system. A peculiar and novel feature of our setup is that the clockwork is driven by the same irreversible process that is used for tick readout: the tunnelling of electrons through the device. We have confirmed that the clock's accuracy increases with the rate of entropy production, as the current drives the oscillator into a limit cycle with long-lived phase coherence. We have identified that both phase and amplitude fluctuations play an important role in determining the statistical properties of the measured current.
A key finding of our work is that the ticks of the clock are generally correlated with each other, and that these correlations can improve the clock's accuracy, as quantified by the Allan variance. It remains an open problem to understand under what conditions such correlations can improve clock accuracy in general. We note that temporal correlations have recently been considered in the context of (classical) parameter estimation, where they may either help or hinder depending on their nature [64]. Yet far more work is needed to unify the theory of quantum ticking clocks with the standard framework of quantum parameter estimation, especially beyond the asymptotic regime of infinitely many samples (or, in this case, ticks).
Our model is strongly motivated by modern electromechanical platforms, meaning that our predictions could be probed directly in state-of-the-art experiments. In the present work, we have limited ourselves to the quasi-adiabatic regime, which is often the most experimentally relevant one and is also the simplest regime to treat theoretically. However, the quasi-adiabatic regime is very far from thermodynamically optimal, since many electrons tunnel through the device per tick of the clock. This regime is also sub-optimal from the perspective of readout, since the mechanical oscillations are "oversampled" by the tunnelling events [16]. It is thus important to explore the complementary regime where each oscillation is sampled by only a handful of electrons, approaching the lower sensitivity limit of electrical readout and thus entailing a far smaller rate of dissipation. In this regime, fluctuations in the tunnelling time of individual electrons (e.g. see Ref. [65] and references therein) may become relevant for the clock's performance. At the level of theory and modelling, going beyond the quasi-adiabatic limit requires a non-perturbative methodology capable of tackling situations where the electromechanical coupling and the tunnelling rates are comparable. In future work, we plan to apply the mesoscopic-leads/extended-reservoir approach [66,67] to address this problem.
It is interesting to note that our model predicts clock accuracies (Fig. 6) that significantly surpass thermodynamic performance bounds applying to classical clocks with a discrete state space [12]. In particular, the thermodynamic uncertainty relation [17,18] limits the accuracy of such clocks to one half of the entropy production per tick, i.e. N ≤ Σ/2ν (assuming the ticks are generated by a renewal process). The fact that our electromechanical clock can surpass these bounds is unsurprising: the dynamics of our clock is governed by an underdamped classical Langevin equation, which does not necessarily obey them [31,68]. Nevertheless, the approximations inherent to our model neglect some potentially relevant contributions, which lead us to overestimate the accuracy and underestimate the entropy production.
First, our definition of the ticks is based on the quasi-adiabatic current fluctuations quantified by C_P(t). We have neglected the shot-noise contribution due to the discrete nature of the electronic charge, i.e. C_Q(t). This additional source of white noise would partially obscure the precise time at which ticks occur. To improve tick identification in the presence of such noise, one could redefine the tick time to occur when the time derivative of the current, rather than the current itself, is at its maximum [50]. Second, reading out relatively small currents would require amplification in practice, and we have ignored the additional dissipation associated with this measurement step. We note that the thermodynamic cost of measurement has also been neglected in previous work on autonomous clocks [9,11,14,30]. Understanding how this additional source of dissipation enters the balance of entropy production for timekeeping is an important open question for future work.
Finally, it would be interesting to explore the thermodynamics of timekeeping for other kinds of self-oscillators in the quantum regime. Conditions for self-oscillations to emerge from open quantum dynamics have only recently been established [69][70][71][72], within the broader context of research on broken time-translation symmetry [73][74][75]. However, a general recipe to exploit such symmetry breaking for accurate timekeeping remains lacking.

small quantities (Born approximation), (ii) assuming that the integrand decays rapidly as a function of t′, we approximate r(t − t′) ≈ r(t) and extend the upper integration limit to infinity (Markov approximation). After tracing over the electronic degrees of freedom, therefore, the first two terms on the RHS of Eq. (A8) reduce to the Fokker-Planck generator where K(x, p, t) = tr K(x, p, t). The same Fokker-Planck operator dictates the evolution of the Wigner function, ∂W/∂t = L_FP W, which can therefore be reconstructed by sampling the equivalent Langevin equation (14). However, unlike the Wigner function, the equation for K(x, p, t) has an additional inhomogeneity deriving from the third term on the RHS of Eq. (A8), i.e.
Putting everything together, we obtain Eq. (25). Finally, we note that the above derivation holds for an arbitrary operator Î whose support is restricted to the electronic system in the Schrödinger picture (otherwise Eq. (A12) does not hold).
Appendix B: Modelling small amplitude and phase fluctuations near the limit cycle

In this appendix, we derive a simple description of phase and amplitude fluctuations on the oscillator's limit cycle. Our starting point is the Langevin equation (14) governing the evolution of the oscillator. We first transform the equation to polar coordinates. Using the substitution zω_0 = v and converting to polar coordinates, x = A cos ϕ, z = A sin ϕ, the equations of motion become Here we have used Itô's lemma, with dz² = D_x ω_0² dt, dx² = 0 and dx dz = 0. Focusing first on the amplitude, the equation of motion is given as Here dw is a Wiener increment, and the subscript (A, ϕ) highlights dependence on both the amplitude and the phase. Assuming the amplitude change over a single cycle is small, we can replace the deterministic terms with their mean over a single oscillation and the stochastic term with its RMS over a single oscillation: Here dw_A is an effective coarse-grained Wiener increment. Due to the symmetry of the ⟨n⟩_x term, its average over a single cycle is zero. We can find the limit cycle of the system by taking the expectation value of both sides of the equation and setting ∆E[A]/∆t = 0. The limit cycle is given by the solution of the transcendental equation We can also truncate Eq. (B4) to lowest order in the dissipation and fluctuation terms around the limit cycle. Using the substitutions Substituting these into Eq. (B4), one arrives at the much simpler equation This equation describes the amplitude of the system as an Ornstein-Uhlenbeck process [54]. We can perform a similar analysis on the equation of motion for the phase: We will assume that the oscillator's natural frequency is the dominant term in the deterministic part of the equation. The phase evolution is given by dϕ Assuming sufficiently large ω_0² A/√D_x, we can linearise Eq.
(B10) by replacing the stochastic term prefactor with its RMS over a single cycle, so that dϕ = −ω_0 dt + √D_ϕ dw_ϕ, (B11)
where dw_ϕ is the effective Wiener increment originating from the linearisation. The diffusion term is given by Note that in the main text near Eqs. (30) and (31) we have replaced ϕ → −ϕ merely for clarity of presentation.
Since the amplitude and phase noise are independent within our approximations, it is straightforward to compute the steady-state position autocorrelation function [54] This function exhibits oscillatory decay at a rate that is ultimately dictated by the phase decoherence time, D_ϕ^{−1}. This conclusion holds irrespective of the scale, D_A, and regression rate, γ_A, of the amplitude fluctuations. This point is discussed in more detail at the end of Appendix A.
To compare the characteristic timescales of amplitude and phase fluctuations in this description, we numerically compute γ_A and D_ϕ from Eqs. (B6) and (B12), respectively. Their ratio is plotted in Fig. 9 as a function of the voltage bias, for values of V above and below the threshold for self-oscillations. We see that for all voltages considered, D_ϕ/4γ_A ≪ 1, implying that phase coherence lasts longer than the characteristic timescale of amplitude variations.
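The linearised description above is straightforward to simulate directly. The sketch below integrates the Ornstein-Uhlenbeck amplitude equation and the phase-diffusion equation by the Euler-Maruyama method; the parameter values and names are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def simulate_limit_cycle(a0, gamma_a, d_a, omega0, d_phi, dt, steps, rng):
    """Euler-Maruyama integration of the linearised fluctuation equations:
        dA   = -gamma_a (A - a0) dt + sqrt(d_a)   dw_A   (Ornstein-Uhlenbeck)
        dphi = -omega0 dt           + sqrt(d_phi) dw_phi (biased phase diffusion)
    """
    a = np.empty(steps)
    phi = np.empty(steps)
    a[0], phi[0] = a0, 0.0
    sq = np.sqrt(dt)
    for i in range(1, steps):
        a[i] = a[i - 1] - gamma_a * (a[i - 1] - a0) * dt + np.sqrt(d_a) * sq * rng.standard_normal()
        phi[i] = phi[i - 1] - omega0 * dt + np.sqrt(d_phi) * sq * rng.standard_normal()
    return a, phi, a * np.cos(phi)   # amplitude, phase, position x = A cos(phi)

rng = np.random.default_rng(4)
a, phi, x = simulate_limit_cycle(a0=1.0, gamma_a=1.0, d_a=0.02,
                                 omega0=5.0, d_phi=0.001, dt=0.01,
                                 steps=200_000, rng=rng)
amp_var = np.var(a[20_000:])   # stationary OU variance, expected d_a / (2 * gamma_a)
```

With these parameters the amplitude relaxes at rate γ_A while the phase variance grows as D_ϕ t, so the simulated trajectory reproduces the separation of timescales D_ϕ/γ_A ≪ 1 discussed above.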
To demonstrate this explicitly, we assume that the switching occurs via a simple telegraph process described by the master equation where λ_i are the transition rates between the sub-signals. The steady-state probabilities are given by The transition probability of switching from state i to state j ≠ i after a time t > 0 is given by the solution of Eq. (C8) with p_i(0) = 1, i.e.
p_{j|i}(t) = [λ_i/(λ_1 + λ_2)] [1 − e^{−(λ_1+λ_2)t}]. (C10) Substituting these results into Eq. (C7) gives the two-point correlation function of the telegraph process Therefore, the total correlation function [Eq. (C5)] is the sum of two terms: an oscillatory function determined by the C_i(t), which decays due to amplitude and phase fluctuations within each state, and a pure exponential decay given by Eq. (C11). So long as the total switching rate λ_1 + λ_2 is slow compared to the decay of the C_i(t), the latter contribution will survive after the oscillations have died out. This is precisely the qualitative behaviour above threshold seen in Fig. 3. This model also predicts a large zero-frequency component of the current noise, consistent with our observations in Sec. III A 3. From Eq. (34) one has [76] S(0) = 2 (C12) Therefore, the switching term contributes a term that scales as which diverges as λ_i → 0. For example, if λ_1 = λ_2 = λ, then we simply have S(0) ∼ λ^{−1}, where λ^{−1} is the lifetime of each state.
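The telegraph model is simple to check by simulation. The sketch below, our own construction with illustrative rates, samples the two-state process on a time grid; its autocovariance decays as e^{−(λ_1+λ_2)t}, as in Eq. (C11).

```python
import numpy as np

def telegraph_signal(i1, i2, lam1, lam2, dt, steps, rng):
    """Sample a two-state telegraph process on a regular time grid.
    State 0 emits current i1 and leaves at rate lam1; state 1 emits i2
    and leaves at rate lam2 (exponential dwell times)."""
    rates, currents = (lam1, lam2), (i1, i2)
    out = np.empty(steps)
    state = 0
    t, t_next = 0.0, rng.exponential(1.0 / rates[0])
    for n in range(steps):
        while t >= t_next:                    # apply all switches before this grid point
            state = 1 - state
            t_next += rng.exponential(1.0 / rates[state])
        out[n] = currents[state]
        t += dt
    return out

rng = np.random.default_rng(5)
sig = telegraph_signal(i1=0.0, i2=1.0, lam1=1.0, lam2=1.0, dt=0.01, steps=500_000, rng=rng)
m = sig.mean()
c0 = np.mean((sig - m)**2)                        # expected p1*p2*(i1 - i2)^2 = 0.25
lag = 100                                         # time lag of 1.0
c1 = np.mean((sig[:-lag] - m) * (sig[lag:] - m))  # expected c0 * exp(-(lam1 + lam2) * 1.0)
```

Shrinking the rates λ_i at fixed record length inflates the low-frequency noise, consistent with the S(0) ∼ λ^{−1} divergence noted above.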
Note that other models of slow amplitude fluctuations give rise to similar behaviour. However, it is crucial that the cycle-averaged signal depends on the amplitude, which is reflected here by the condition I_1 ≠ I_2. For example, the model of independent amplitude and phase fluctuations developed in Appendix B yields the position autocorrelation function given by Eq. (B13). This does not show a separation of timescales, even when the amplitude fluctuations are much slower than the phase fluctuations, i.e. D_A, γ_A ≪ D_ϕ. This is because the average of x(t) = A(t) cos ϕ(t) over one cycle is zero, independently of the value of A(t).
To illustrate the importance of an amplitude-dependent cycle average, let us instead consider a model of stochastic oscillations with an offset: where b and c are constants, and A(t) and ϕ(t) are described by an Ornstein-Uhlenbeck process [Eq. (30)] and a biased diffusion process [Eq. (31)], respectively. The resulting stochastic dynamics is qualitatively similar to the behaviour seen in Fig. 2 in regions where the peak-to-peak amplitude is large, with c representing the maximum current and the scale factor b determining how much the oscillations reduce the current on average. In particular, the cycle average of this signal is now amplitude-dependent and given by E_ϕ[I(t)] = c − bA(t). We find that the autocorrelation function for this model is given by (C15) The first term above is identical to Eq. (B13), but now another term appears that depends only on the amplitude fluctuations. This latter term can be understood as a continuous analogue of Eq. (C6), where now b instead of I_1 − I_2 determines how amplitude fluctuations affect the cycle-averaged signal.

Appendix D: Allan variance for a renewal clock
In this appendix we derive the asymptotic Allan variance for a clock whose ticks are generated by a renewal process. Our derivation takes a different approach from that of Ref. [30], although the results are equivalent up to a rescaling. Let n(t) denote the number of ticks generated after a coordinate time t has elapsed. Let us assume that n(t) is a renewal process, for which the mean and variance are well known [63]:

E[n(t)] = t/µ + O(t^0), (D1)
Var[n(t)] = σ² t/µ³ + O(t^0). (D2)

Expanding the definition of the Allan variance,

σ_y²(T) = ½ E[(ȳ(t + T, T) − ȳ(t, T))²]
= ½ E[ȳ(t + T, T)²] + ½ E[ȳ(t, T)²] − E[ȳ(t + T, T) ȳ(t, T)]
= Var[ȳ(t, T)].

On the second line, the first two terms are equal because the process is stationary. The third line then follows in the limit of large T because ȳ(t + T, T) and ȳ(t, T) become statistically independent, i.e. the number of renewals in the time interval [t, t + T] becomes independent of the number of renewals in the following interval [t + T, t + 2T]. Therefore, the Allan variance for large T is (up to scale factors) nothing but the variance of n(t + T) − n(t), i.e. the number of renewals within a time interval of duration T. Analogously to Eq. (D2), this result is known asymptotically: Var[n(t + T) − n(t)] = σ²T/µ³ + O(T^0) [63]. Finally, this yields the asymptotic Allan variance for a renewal process, σ_y²(T) ≈ (µ/T)² Var[n(t + T) − n(t)] = σ²/(µT), which is equivalent to Eq. (45).
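This asymptotic scaling can be checked numerically by simulating a renewal clock with i.i.d. waiting times of mean µ and variance σ² and comparing the measured Allan variance with σ²/(µT), our reading of the asymptotic result. The names and parameter values below are illustrative, and the clock reading is taken to advance by µ per registered tick.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.2                              # waiting-time mean and standard deviation
shape, scale = (mu / sigma)**2, sigma**2 / mu     # gamma parameters with this mean/variance
taus = rng.gamma(shape, scale, size=2_000_000)    # i.i.d. waiting times -> renewal process
ticks = np.cumsum(taus)

def allan(T):
    """Non-overlapping Allan variance, reading the clock as mu per registered tick."""
    edges = np.arange(0.0, ticks[-1], T)
    counts = np.histogram(ticks, bins=edges)[0]
    ybar = (mu * counts - T) / T
    return 0.5 * np.mean(np.diff(ybar)**2)

def predicted(T):
    """Asymptotic renewal result sigma^2 / (mu * T)."""
    return sigma**2 / (mu * T)
```

Evaluating `allan(T)` for increasing T reproduces the 1/T decay, with the prefactor approaching `predicted(T)` once T spans many mean waiting times.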

Figure 1. Schematic of the setup. A central system, here a quantum dot, is coupled to two electrodes, each at temperature T, with the left electrode having chemical potential µ_L and the right µ_R. The quantum dot is coupled to a simple harmonic oscillator mode. Under an appropriate static voltage bias, the current flowing through the system drives self-sustained mechanical oscillations, which in turn induce an oscillatory component in the current. A series of ticks is recorded by counting oscillations in the output current.

Figure 4. Power spectrum of the output current near the oscillation frequency ω ≈ 2ω_0, for four different voltage biases; all other parameters are the same as Fig. 2. The frequency-independent contribution from C_Q(t) is small and only makes a significant contribution for smaller biases, as shown in the inset.

Figure 5. Waiting-time distributions of the ticks for different voltage biases, with other parameters the same as Fig. 2. Dots are data from the numerical calculations; the solid curves are best fits to the inverse Gaussian distribution (39), with fitting parameters µ and σ.

Figure 6. Accuracy (red line) and resolution (blue line) for increasing bias voltage, plotted against the entropy produced per tick of the clock. Parameters are the same as Fig. 2.

Figure 7. Allan variance as a function of integration time T for various voltage biases. Parameters are the same as Fig. 2.

Figure 8. The main panel shows the mutual information [Eq. (48)] between a tick and the m-th subsequent tick, i.e. the pairwise correlations between the waiting times τ_i and τ_{i+m}. The inset shows the relative entropy [Eq. (47)] between the distribution of the n-tick clock reading P(t_n) and the n-fold convolution of the single-tick WTD, which measures the total correlations among all n ticks. Parameters are the same as Fig. 2.

Figure 9. Ratio between the phase diffusion coefficient D_ϕ and the amplitude damping rate γ_A, computed for small fluctuations away from the ideal limit cycle. The relatively small values indicate that phase coherence on the limit cycle is long-lived compared to the rate of amplitude variation. Parameters are the same as Fig. 2.