
Energy-efficient quantum frequency estimation


Published 7 June 2018 © 2018 The Author(s). Published by IOP Publishing Ltd on behalf of Deutsche Physikalische Gesellschaft
Citation: Pietro Liuzzo-Scorpo et al 2018 New J. Phys. 20 063009. DOI: 10.1088/1367-2630/aac5b6


Abstract

The problem of estimating the frequency of a two-level atom in a noisy environment is studied. Our interest is to minimise both the energetic cost of the protocol and the statistical uncertainty of the estimate. In particular, we prepare a probe in a 'GHZ-diagonal' state by means of a sequence of qubit gates applied on an ensemble of n atoms in thermal equilibrium. Noise is introduced via a phenomenological time-non-local quantum master equation, which gives rise to a phase-covariant dissipative dynamics. After an interval of free evolution, the n-atom probe is globally measured at an interrogation time chosen to minimise the error bars of the final estimate. We model explicitly a measurement scheme which becomes optimal in a suitable parameter range, and are thus able to calculate the total energetic expenditure of the protocol. Interestingly, we observe that scaling up our multipartite entangled probes offers no precision enhancement when the total available energy ${\boldsymbol{ \mathcal E }}$ is limited. This is in stark contrast with standard frequency estimation, where larger probes (more sensitive but also more 'expensive' to prepare) are always preferred. Replacing ${\boldsymbol{ \mathcal E }}$ by the resource that places the most stringent limitation on each specific experimental setup would thus help to formulate more realistic metrological prescriptions.


Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

While (classical) metrology is concerned with producing the most accurate estimate of some relevant parameter, quantum metrology is aimed at exploiting genuinely quantum traits to go beyond classical metrological limits [1-3]. Classically, there would be no difference between running some estimation protocol sequentially N times on one probe, and running the same protocol simultaneously on n (uncorrelated) copies of that probe for M = N/n rounds. Quantum-mechanically, however, such an n-partite probe can be prepared in an entangled state, so that its estimation efficiency grows super-extensively (see the footnote at the end of the paper). Here 'super-extensive' stands for faster-than-linear in the probe size, and the 'estimation efficiency' is proportional to the inverse of the mean squared error.

More precisely, under rather weak conditions, the statistical uncertainty of the estimate of some parameter $y=\bar{y}\pm \delta y$ may be tightly lower-bounded as $\delta y\geqslant 1/\sqrt{M{{\mathscr{F}}}_{y}({\boldsymbol{O}})}$ [10, 11], where ${{\mathscr{F}}}_{y}({\boldsymbol{O}})$ denotes the Fisher information of a sufficiently large number M of measurements of the observable ${\boldsymbol{O}}$ on the n-partite probe. Importantly (although often disregarded), the length M of the dataset used to build the estimate will always be capped by the limited availability of some essential resource ${\boldsymbol{ \mathcal R }};$ that is, if r is the amount of resource consumed per round, $M={\boldsymbol{ \mathcal R }}/r$ and hence $\delta y\geqslant 1/\sqrt{{\boldsymbol{ \mathcal R }}{\eta }_{{\boldsymbol{ \mathcal R }}}}$, where ${\eta }_{{\boldsymbol{ \mathcal R }}}\equiv {{\mathscr{F}}}_{y}({\boldsymbol{O}})/r$ is the estimation efficiency. A scaling such as ${\eta }_{{\boldsymbol{ \mathcal R }}}\sim {n}^{c}$, with c > 1, would be the hallmark of quantum-enhanced sensing.
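To make this bookkeeping concrete, the following minimal Python helper (ours, with made-up example numbers) simply evaluates $M={\boldsymbol{ \mathcal R }}/r$ and the bound $\delta y\geqslant 1/\sqrt{{\boldsymbol{ \mathcal R }}{\eta }_{{\boldsymbol{ \mathcal R }}}}$; it is only meant to illustrate how a fixed resource budget caps the number of rounds.

```python
import math

def precision_bound(total_resource, cost_per_round, fisher_per_round):
    """Lower bound on delta_y when the number of rounds is capped by a resource
    budget: M = R / r and delta_y >= 1 / sqrt(M F) = 1 / sqrt(R eta_R)."""
    rounds = total_resource / cost_per_round            # M = R / r
    efficiency = fisher_per_round / cost_per_round      # eta_R = F / r
    bound = 1.0 / math.sqrt(rounds * fisher_per_round)
    assert abs(bound - 1.0 / math.sqrt(total_resource * efficiency)) < 1e-12
    return bound

# Doubling the per-round cost at fixed Fisher information halves M and
# worsens the bound by a factor sqrt(2).
print(precision_bound(total_resource=1.0, cost_per_round=0.01, fisher_per_round=4.0))
print(precision_bound(total_resource=1.0, cost_per_round=0.02, fisher_per_round=4.0))
```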

Although the unavoidable effects of environmental noise often cancel out any quantum advantage [12-16], a super-extensive growth of the efficiency may still be attained under time-inhomogeneous phase-covariant noise [17-20], under even more generic Ohmic dissipation [21], under noise with a particular geometry [22, 23], or in setups involving quantum error correction [24-26].

For instance, when it comes to frequency estimation, the total running time ${\boldsymbol{ \mathcal T }}$ is usually regarded as the resource to be optimally partitioned [12]. Note that, even if features such as the amount of entanglement, coherence [27], or squeezing [28] in the initial state of the probe, or the internal interaction range among its constituents [4-7, 9], could all be regarded as legitimate metrological resources, these do not fit in our framework. That is, even if, e.g., the amount of entanglement in the preparation of an n-partite probe were severely limited in practice, this would not cap the number of rounds M of the estimation protocol: a fresh copy of the same entangled state would be supplied at the start of every iteration until either the time, the overall number of probe constituents, or the available energy has been fully consumed.

In our case, we shall look precisely at the total energy consumed ${\boldsymbol{ \mathcal E }}$, and show that the notion of optimality that follows from the maximisation of an energy efficiency differs fundamentally from the one based solely on the portioning of the available time. In particular, while the maximisation of a time efficiency encourages the use of multipartite entangled probes with n as large as possible, energetic considerations advise against it: the high costs associated with the creation and manipulation of large multipartite correlated states do not pay off from the metrological viewpoint. In this way, we put into qualitative terms the intuitive notion that multi-particle entanglement-enabled metrology may not always be practical [29].

In particular, as illustrated in figure 1, we consider an ensemble of n initially thermal two-level atoms that are brought, through a sequence of qubit gates, into a sensitive GHZ-diagonal state [30] (see section 2.1). Such an entangled probe is left to evolve freely under the action of time-non-local covariant noise. Specifically, we resort to a phenomenological quantum master equation [31-33] which explicitly accounts for memory effects and gives rise to a non-divisible dissipative dynamics [33] (see section 2.2 for full details). We then devise a measurement protocol consisting of a sequence of qubit gates followed by an energy measurement (see section 2.3). We further provide the specific measurement setting for which this scheme becomes optimal for frequency estimation in a suitable parameter range (see section 2.4). By looking at the changes in the average energy of the probe during the preparation and measurement stages, we explicitly obtain the total energetic cost per round. We find that adjusting the free evolution time so as to maximise the time efficiency of the protocol does lead to a super-extensive scaling in the probe size; specifically ${n}^{3/2}$, or 'Zeno scaling' [18, 19]. In contrast, the energy efficiency of the very same probe decays monotonically with n, even when the interrogation time is chosen to maximise it (see section 3).


Figure 1. Circuit representation of the (a) preparation, (b) free evolution, and (c) readout stages of our estimation protocol, as discussed in the main text. (a) A probe system composed of 1 control (c) qubit and n − 1 register (r) qubits, initially in a thermal state ${{\boldsymbol{\rho }}}_{0}$, is prepared into a GHZ-diagonal state ${{\boldsymbol{\rho }}}_{3}$ by a sequence of CNOT, Hadamard [H], and CNOT gates. (b) The system is left to evolve freely for a time t under a noisy environment according to a master equation with a memory kernel; this amounts to the action of the phase-covariant channel Λ, which imprints a phase ϕ = ω t on the qubits while inducing dissipation effects, overall transforming the state of the system into ${{\boldsymbol{\rho }}}_{4}$. (c) A pre-measurement sequence of qubit rotations, CNOT gates, and a rotated Hadamard on the control qubit is applied, leading to the state ${{\boldsymbol{\rho }}}_{6};$ each rounded rectangle (ζ) indicates a single-qubit rotation by an angle ζ, described by the unitary ${{\rm{e}}}^{-{\rm{i}}\zeta {{\boldsymbol{\sigma }}}_{z}/2}$. The system is finally measured in the energy basis to estimate the frequency ω with optimal efficiency.


Interestingly, note that the observed super-extensive growth of the time efficiency is attained while starting from thermal qubits that are prepared into a GHZ-diagonal state. In an accompanying article [34], the same super-extensive growth of the time efficiency is found for arbitrary qubit probes prepared in GHZ-diagonal states and used for frequency estimation in a noisy environment. GHZ-diagonal states had been conjectured to be optimal for phase estimation with mixed probes in the absence of noise [30]. Here, we show that they lead to optimal scaling even in a noisy scenario. We also observe that, in our setting, memoryless 'Markovian' dissipative dynamics generally produces less efficient estimates, thus suggesting that memory effects might be beneficial for the energy efficiency of parameter estimation (see section 3).

2. Methods

2.1. Probe initialisation

The system of interest is an ensemble of n non-interacting two-level atoms thermalised at temperature T, whose frequency ω needs to be estimated. For simplicity of notation we shall set ℏ and the Boltzmann constant kB to 1 in all that follows. Each atom has a Hamiltonian ${\boldsymbol{h}}=\tfrac{\omega }{2}{{\boldsymbol{\sigma }}}_{z}$ and is initially in the state

${\boldsymbol{\varrho }}=\tfrac{1}{2}\left({\mathbb{1}}-\epsilon \,{{\boldsymbol{\sigma }}}_{z}\right),$    (1)

where the polarisation bias $\epsilon =\tanh \left(\tfrac{\omega }{2T}\right)$ so that ${\boldsymbol{\varrho }}\propto \exp (-{\boldsymbol{h}}/T)$, and ${{\boldsymbol{\sigma }}}_{z}$ denotes the z Pauli matrix. The global Hamiltonian is ${\boldsymbol{H}}=\tfrac{\omega }{2}{{\boldsymbol{J}}}_{z}$, where ${{\boldsymbol{J}}}_{z}={{\boldsymbol{\sigma }}}_{z}\otimes {{\mathbb{1}}}^{\otimes n-1}+{\mathbb{1}}\otimes {{\boldsymbol{\sigma }}}_{z}\otimes {{\mathbb{1}}}^{\otimes n-2}+\cdots +{{\mathbb{1}}}^{\otimes n-1}\otimes {{\boldsymbol{\sigma }}}_{z}$ and the total initial state is simply

${{\boldsymbol{\rho }}}_{0}={{\boldsymbol{\varrho }}}_{c}\otimes {{\boldsymbol{\varrho }}}_{r}^{\otimes n-1},$    (2)

where we have labelled the first atom as c for 'control qubit' while the rest are tagged r, for 'register'.
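As a concrete reference for what follows, here is a short numpy sketch (ours, not part of the paper; the parameter values are purely illustrative) that builds the thermal single-atom state and the collective Hamiltonian ${\boldsymbol{H}}=\tfrac{\omega }{2}{{\boldsymbol{J}}}_{z}$.

```python
import numpy as np
from functools import reduce

sz, id2 = np.diag([1.0, -1.0]), np.eye(2)    # Pauli z and identity

def thermal_qubit(omega, T):
    """rho = (1/2)(1 - eps * sigma_z) with eps = tanh(omega / 2T),
    i.e. rho is proportional to exp(-h / T) for h = (omega / 2) sigma_z."""
    eps = np.tanh(omega / (2 * T))
    return 0.5 * (id2 - eps * sz)

def collective_Jz(n):
    """J_z = sum over atoms of 1 x ... x sigma_z x ... x 1."""
    terms = []
    for k in range(n):
        ops = [id2] * n
        ops[k] = sz
        terms.append(reduce(np.kron, ops))
    return sum(terms)

omega, T, n = 1.0, 200.0, 3
rho0 = reduce(np.kron, [thermal_qubit(omega, T)] * n)    # control plus registers
H = 0.5 * omega * collective_Jz(n)
print(np.trace(H @ rho0).real)    # initial energy, equal to -n * omega * eps / 2
```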

We shall prepare our n-atom probe in a GHZ-diagonal state by means of a CNOT transformation, followed by a Hadamard gate and a further CNOT (see figure 1(a)) [30]. That is, we first apply the unitary ${| 0\rangle }_{c}{\langle 0| }_{c}\otimes {{\mathbb{1}}}^{\otimes n-1}+{| 1\rangle }_{c}{\langle 1| }_{c}\otimes {{\boldsymbol{\sigma }}}_{x}^{\otimes n-1}$ on ${{\boldsymbol{\rho }}}_{0}$. Introducing the notation $\bar{{\boldsymbol{A}}}\equiv {{\boldsymbol{\sigma }}}_{x}{\boldsymbol{A}}{{\boldsymbol{\sigma }}}_{x}$, this yields

Equation (3)

Then, the Hadamard transformation ${{\boldsymbol{U}}}_{H}\equiv \tfrac{1}{\sqrt{2}}({{\boldsymbol{\sigma }}}_{x}+{{\boldsymbol{\sigma }}}_{z})\otimes {{\mathbb{1}}}^{\otimes n-1}$ acts solely on the control qubit:

Equation (4)

and finally, the second CNOT transformation leads to

Equation (5)

where the missing elements are just Hermitian conjugates of the opposite corners of each matrix. The resulting state will subsequently undergo dissipative evolution (see section 2.2) before being interrogated.

As we will see in section 2.2, our model of dissipation gives rise to phase-covariant dynamics. It is known that, under this type of noise, the mean squared error of the frequency estimate admits a tight lower bound lying below the standard quantum limit [19, 20]. It was further shown that this bound is asymptotically saturable by using (pure) GHZ input states. On the other hand, (mixed) GHZ-diagonal states such as ${{\boldsymbol{\rho }}}_{3}$ were found to perform well, and conjectured to be optimal, in noiseless phase estimation with mixed probes [30]. In section 3 we will illustrate that the optimal 'Zeno scaling', introduced in [18, 19], can also be attained with such GHZ-diagonal states.

Even though in the present paper we will limit ourselves to GHZ-diagonal preparations, it seems interesting to compare the size scaling of the metrological performance of different preparations. One would certainly find that some preparations may allow for a more energy-efficient estimation than others at fixed probe size. Unfortunately, as we will see below, our calculations rely heavily on the simple analytical structure of GHZ-diagonal states undergoing phase-covariant dissipation. This makes it difficult to extrapolate our results to other initial states.

Finally, note that the energetic cost of this initialisation stage ${{\boldsymbol{ \mathcal E }}}_{{\rm{init}}}={\rm{tr}}\,\{{\boldsymbol{H}}({{\boldsymbol{\rho }}}_{3}-{{\boldsymbol{\rho }}}_{0})\}$ is linear in the probe size and evaluates to

${{\boldsymbol{ \mathcal E }}}_{{\rm{init}}}=\tfrac{n}{2}\,\omega \,\epsilon =\tfrac{n\,\omega }{2}\tanh \left(\tfrac{\omega }{2T}\right).$    (6)

At this point, one may wonder why we do not cool the probes down to the ground state before starting the estimation protocol so as to work with pure rather than mixed states. This could certainly be done (e.g. by coherent feedback cooling), so long as the corresponding energy cost ${{\boldsymbol{ \mathcal E }}}_{{\rm{cool}}}$ is added to the total energetic bookkeeping; just like (6), ${{\boldsymbol{ \mathcal E }}}_{{\rm{cool}}}$ would scale linearly in n. Such a cooling stage is not essential anyway, and we will keep it out of the picture in what follows, thus avoiding having to model it explicitly.
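The preparation cost can also be checked by brute force. The sketch below (ours; the control is taken to be the first qubit, both CNOTs act on all n − 1 registers at once, and the parameter values are illustrative) builds ${{\boldsymbol{\rho }}}_{3}$ explicitly for small probes and evaluates ${\rm{tr}}\,\{{\boldsymbol{H}}({{\boldsymbol{\rho }}}_{3}-{{\boldsymbol{\rho }}}_{0})\}$; the linear growth with n is apparent.

```python
import numpy as np
from functools import reduce

id2, sx, sz = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])
P0, P1 = np.diag([1., 0.]), np.diag([0., 1.])
kron_all = lambda ops: reduce(np.kron, ops)

def preparation_cost(n, omega=1.0, T=200.0):
    """Energy cost tr{H (rho3 - rho0)} of the CNOT, Hadamard, CNOT sequence of
    figure 1(a), with the first qubit as control and n - 1 register targets."""
    eps = np.tanh(omega / (2 * T))
    rho0 = kron_all([0.5 * (id2 - eps * sz)] * n)                   # thermal probe
    H = 0.5 * omega * sum(kron_all([sz if k == j else id2 for k in range(n)])
                          for j in range(n))                        # (omega / 2) J_z
    cnot = kron_all([P0] + [id2] * (n - 1)) + kron_all([P1] + [sx] * (n - 1))
    had = kron_all([(sx + sz) / np.sqrt(2)] + [id2] * (n - 1))      # Hadamard on control
    U = cnot @ had @ cnot                                           # rightmost gate acts first
    rho3 = U @ rho0 @ U.conj().T
    return np.trace(H @ (rho3 - rho0)).real

for n in range(2, 6):
    print(n, preparation_cost(n))    # grows linearly with the probe size
```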

2.2. Free evolution

2.2.1. Phenomenological master equation

In order to account for the environmental effects in our probe, we will assume that each atom evolves according to a time-non-local master equation (see figure 1(b)) with a phenomenological exponentially decaying memory kernel [31]. The reason for this choice is that the resulting dissipative dynamics is phase-covariant, as opposed to the one following from a more canonical setting, such as the spin-boson model [21, 35]. This will eventually allow us to establish a connection with known results in the literature [20]. Moreover, due to its simplicity, the model considered here can be solved exactly.

Specifically, we shall think of a generic scenario in which a two-level atom with Hamiltonian ${\boldsymbol{h}}$ interacts with a bath (${{\boldsymbol{H}}}_{B}$) through the interaction term ${{\boldsymbol{H}}}_{{\rm{int}}}$. In the interaction picture with respect to the free Hamiltonian ${{\boldsymbol{H}}}_{0}={\boldsymbol{h}}+{{\boldsymbol{H}}}_{B}$ (indicated with subindex I in what follows), our phenomenological equation would read

$\tfrac{{\rm{d}}}{{\rm{d}}t}{{\boldsymbol{\varrho }}}_{I}(t)={\int }_{0}^{t}{\rm{d}}s\,f(t-s)\,{\boldsymbol{ \mathcal L }}[{{\boldsymbol{\varrho }}}_{I}(s)],$    (7)

with $f(t)\equiv \lambda {{\rm{e}}}^{-\lambda | t| }$ and where ${\boldsymbol{ \mathcal L }}$ denotes the Gorini–Kossakowski–Lindblad–Sudarshan (Markovian) generator [36, 37]

${\boldsymbol{ \mathcal L }}[{\boldsymbol{\varrho }}]={{\rm{\Gamma }}}_{\omega }\left({{\boldsymbol{\sigma }}}_{-}{\boldsymbol{\varrho }}\,{{\boldsymbol{\sigma }}}_{+}-\tfrac{1}{2}{\{{{\boldsymbol{\sigma }}}_{+}{{\boldsymbol{\sigma }}}_{-},{\boldsymbol{\varrho }}\}}_{+}\right)+{{\rm{\Gamma }}}_{-\omega }\left({{\boldsymbol{\sigma }}}_{+}{\boldsymbol{\varrho }}\,{{\boldsymbol{\sigma }}}_{-}-\tfrac{1}{2}{\{{{\boldsymbol{\sigma }}}_{-}{{\boldsymbol{\sigma }}}_{+},{\boldsymbol{\varrho }}\}}_{+}\right).$    (8)

Here $\{\cdot ,\cdot \}{}_{+}$ stands for the anti-commutator, and the decay rates are ${{\rm{\Gamma }}}_{\omega }={\gamma }_{0}\,[1+{({{\rm{e}}}^{\omega /T}-1)}^{-1}]$ and ${{\rm{\Gamma }}}_{-\omega }={{\rm{e}}}^{-\omega /T}\,{{\rm{\Gamma }}}_{\omega }$. Equation (7) comes with the advantage of explicitly introducing memory effects into the dynamics. Note, however, that one must be careful when dealing with master equations that lack a microscopic derivation [38-40], as they often lead to unphysical results. In particular, equation (7) breaks positivity iff $\tfrac{{\gamma }_{0}}{\lambda \epsilon }\geqslant \tfrac{1}{4}$ [32]. Importantly, the thermal state ${\boldsymbol{\varrho }}$ is the stationary point of equation (7), which is, in turn, consistent with our choice of initial state in section 2.1.

At this point, one may still wonder why we do not choose an arguably more realistic non-covariant noise model derived from first principles, as in [21]. It must be noted that, unlike in [21], we need to know the explicit form of the time-evolved state for arbitrarily large probes. This is a prerequisite for gauging the energy cost of the measurement stage and, eventually, assessing the asymptotic scaling of the overall estimation efficiency. A noise model lacking the 'niceties' of covariant channels not only compromises our ability to analytically evolve the state of the probe, but is also likely to render our proposed measurement scheme sub-optimal. On the plus side, however, covariant dissipation follows quite naturally from generic noise models whenever the ubiquitous rotating-wave approximation is well justified [21, 35]. Furthermore, as can be seen by comparing [20] with [34] and our results below, the details of the specific covariant dissipation model do not seem to affect the qualitative asymptotic features of the estimation protocol.

2.2.2. Connection to the damped Jaynes–Cummings model

The seemingly arbitrary choice of memory kernel in equation (7) may be justified by considering the damped Jaynes–Cummings model on resonance; that is, a two-level atom in an empty and leaky cavity. This setup can be effectively described by the Hamiltonian

Equation (9)

where ${\boldsymbol{B}}\equiv {\sum }_{\mu }{g}_{\mu }({{\boldsymbol{b}}}_{\mu }+{{\boldsymbol{b}}}_{\mu }^{\dagger })$ and the system-bath coupling constants gμ make up the Lorentzian spectral density $J(\omega ^{\prime} )={\sum }_{\mu }{g}_{\mu }^{2}\,\delta (\omega ^{\prime} -{\omega }_{\mu })=\tfrac{1}{2\pi }\tfrac{{\gamma }_{0}{\lambda }^{2}}{{(\omega ^{\prime} -\omega )}^{2}+{\lambda }^{2}}$, centred at the atomic frequency ω [31, 35].

Assuming weak coupling, the use of a second-order Nakajima–Zwanzig master equation [35, 41, 42] is justified. This reads

Equation (10)

where the interaction picture Hamiltonian is ${{\boldsymbol{H}}}_{{\rm{JC}}}(t)={{\boldsymbol{\sigma }}}_{+}(t){\boldsymbol{B}}(t)+{{\boldsymbol{\sigma }}}_{-}(t){{\boldsymbol{B}}}^{\dagger }(t)$, with ${{\boldsymbol{\sigma }}}_{\pm }(t)={{\boldsymbol{\sigma }}}_{\pm }{{\rm{e}}}^{\pm {\rm{i}}\omega t}$ and ${\boldsymbol{B}}(t)={\sum }_{\mu }{g}_{\mu }({{\boldsymbol{b}}}_{\mu }{{\rm{e}}}^{-{\rm{i}}{\omega }_{\mu }t}+{{\boldsymbol{b}}}_{\mu }^{\dagger }{{\rm{e}}}^{{\rm{i}}{\omega }_{\mu }t})$. The state of the environment and the trace over its degrees of freedom are denoted by ${{\boldsymbol{\varrho }}}_{B}$ and ${\mathrm{tr}}_{B}$, respectively.

Combining equations (9) and (10) one arrives at a master equation with the same structure as (7) at zero temperature [35], in which the bath correlation function $\langle {\boldsymbol{B}}(t){{\boldsymbol{B}}}^{\dagger }(s)\rangle =\int \,{\rm{d}}\omega ^{\prime} J(\omega ^{\prime} ){{\rm{e}}}^{{\rm{i}}(\omega -\omega ^{\prime} )(t-s)}=\tfrac{{\gamma }_{0}\lambda }{2}{{\rm{e}}}^{-\lambda (t-s)}$ plays the role of the memory kernel. In spite of this remark, we emphasise that (7) remains a purely phenomenological equation, as the decay rates ${{\rm{\Gamma }}}_{\pm \omega }$ are evaluated at arbitrary temperature T.
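The exponential form of this kernel is easy to verify numerically. The short scipy sketch below (ours; it borrows the values of γ0 and λ used later in figure 2 purely as example inputs) Fourier-transforms the Lorentzian spectral density over a wide frequency window and compares the result with $\tfrac{{\gamma }_{0}\lambda }{2}{{\rm{e}}}^{-\lambda (t-s)}$; the imaginary part vanishes by symmetry of J about the atomic frequency, so only the cosine part is computed.

```python
import numpy as np
from scipy.integrate import quad

gamma0, lam = 1e-4, 5.0    # example values of gamma_0 and lambda

def correlation(tau, cutoff=500.0):
    """Bath correlation function for tau = t - s > 0, obtained as the cosine
    transform of the Lorentzian spectral density centred at the atomic
    frequency (integration variable u = omega' - omega)."""
    lorentzian = lambda u: gamma0 * lam**2 / (2 * np.pi * (u**2 + lam**2))
    value, _ = quad(lorentzian, -cutoff, cutoff, weight='cos', wvar=tau)
    return value

for tau in [0.1, 0.2, 0.5, 1.0]:
    print(tau, correlation(tau), 0.5 * gamma0 * lam * np.exp(-lam * tau))
```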

2.2.3. Dissipative dynamics as a phase-covariant channel

Alternatively, (7) can be brought into the Schrödinger picture and cast in the equivalent time-local form

Equation (11)

For the sake of completeness, we include here the time-dependent decay rates γ± (t) and γz(t) , derived in [33]

Equation (12)

Equation (13)

where ${\xi }_{R}(t)\equiv {{\rm{e}}}^{-\lambda t/2}\left[\tfrac{1}{\sqrt{1-4R}}\sinh \left(\tfrac{\lambda t}{2}\sqrt{1-4R}\right)+\cosh \left(\tfrac{\lambda t}{2}\sqrt{1-4R}\right)\right]$ and $R=\tfrac{{\gamma }_{0}}{\lambda \epsilon }$.

As argued in [20], the dissipative dynamics following from equations such as (11) can be cast as a phase-covariant qubit channel ${\boldsymbol{\varrho }}(t)={\rm{\Lambda }}(t)[{\boldsymbol{\varrho }}(0)]$, i.e. a map such that ${\rm{\Lambda }}\,\circ \,{{\boldsymbol{ \mathcal U }}}_{\varphi }={{\boldsymbol{ \mathcal U }}}_{\varphi }\,\circ \,{\rm{\Lambda }}$, where ${{\boldsymbol{ \mathcal U }}}_{\varphi }{\boldsymbol{\varrho }}\equiv {{\rm{e}}}^{-{\rm{i}}{\boldsymbol{h}}\varphi }\varrho {{\rm{e}}}^{{\rm{i}}{\boldsymbol{h}}\varphi }$ and ' ◦ ' stands for channel composition. These maps can be parametrised as

Equation (14)

where the matrix ${\rm{\Lambda }}(t)$ acts on ${\mathsf{v}}(0)=(1,{\rm{tr}}\,\{{{\boldsymbol{\sigma }}}_{x}{\boldsymbol{\varrho }}(0)\},{\rm{tr}}\,\{{{\boldsymbol{\sigma }}}_{y}{\boldsymbol{\varrho }}(0)\},{\rm{tr}}\,\{{{\boldsymbol{\sigma }}}_{z}{\boldsymbol{\varrho }}(0)\})$ to yield ${\mathsf{v}}(t)={\rm{\Lambda }}(t){\mathsf{v}}(0)$, so that ${\boldsymbol{\varrho }}(t)=\tfrac{1}{2}({{\mathsf{v}}}_{1}(t){\mathbb{1}}+{{\mathsf{v}}}_{2}(t){{\boldsymbol{\sigma }}}_{x}+{{\mathsf{v}}}_{3}(t){{\boldsymbol{\sigma }}}_{y}+{{\mathsf{v}}}_{4}(t){{\boldsymbol{\sigma }}}_{z})$.

For the ensuing dynamics to be completely positive, one must have ${\eta }_{\parallel }(t)\pm \kappa (t)\leqslant 1$ and $1+{\eta }_{\parallel }(t)\geqslant \sqrt{4{\eta }_{\perp }^{2}(t)+{\kappa }^{2}(t)}$. Additionally, since the map describes the action of the environment, it should asymptotically bring the two-level atom back to thermal equilibrium. This entails $\kappa (\infty )=-\epsilon [1-{\eta }_{\parallel }(\infty )]$.

Following [20] one readily finds that equation (7) corresponds to

Equation (15)

where α∈{∥, ⊥ } , ${A}_{\parallel }=\sqrt{1-4R}$, and ${A}_{\perp }=\sqrt{1-2R}$.
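To fix ideas, here is a small numpy illustration (ours, not taken from the paper) of how a generic phase-covariant map acts in the Bloch-vector picture and of the complete-positivity conditions quoted above. The 4 × 4 matrix below is one standard parametrisation consistent with the ordering of ${\mathsf{v}}(0)$ given in the text; the actual time dependence of η∥, η⊥, and κ in our model is the one fixed by equation (15), so the numbers used here are placeholders.

```python
import numpy as np

def phase_covariant_map(eta_par, eta_perp, kappa, phi):
    """Affine (Bloch) representation of a phase-covariant qubit map acting on
    v = (1, <sx>, <sy>, <sz>): a damped rotation by phi in the xy-plane plus a
    contraction and a shift along z. One standard parametrisation, shown only
    for illustration."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1.0,           0.0,          0.0,     0.0],
                     [0.0,  eta_perp * c, eta_perp * s,     0.0],
                     [0.0, -eta_perp * s, eta_perp * c,     0.0],
                     [kappa,         0.0,          0.0, eta_par]])

def cp_conditions_hold(eta_par, eta_perp, kappa):
    """The two conditions quoted in the text:
    eta_par +/- kappa <= 1  and  1 + eta_par >= sqrt(4 eta_perp^2 + kappa^2)."""
    return (eta_par + abs(kappa) <= 1.0 and
            1.0 + eta_par >= np.sqrt(4 * eta_perp**2 + kappa**2))

eps = np.tanh(1.0 / 400)                               # omega = 1, T = 200
eta_par, eta_perp = 0.9, 0.8                           # placeholder values
kappa = -eps * (1 - eta_par)                           # asymptotic relation kappa = -eps (1 - eta_par)
print(cp_conditions_hold(eta_par, eta_perp, kappa))    # True
v_thermal = np.array([1.0, 0.0, 0.0, -eps])
print(phase_covariant_map(eta_par, eta_perp, kappa, phi=0.3) @ v_thermal)  # thermal Bloch vector preserved
```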

2.2.4. State of the probe after the noisy evolution

Having discussed the details of the noise model, let us explicitly write the time-evolved state ${{\boldsymbol{\rho }}}_{4}\equiv {{\rm{\Lambda }}}^{\otimes n}[{{\boldsymbol{\rho }}}_{3}]$ after the action of the channel of equations (14) and (15) on each atom. Its application to a generic qubit state yields

Equation (16)

with ${\alpha }_{s}\equiv \tfrac{1}{2}(1+s{\eta }_{\parallel }+\kappa )$, ${\beta }_{s}\equiv \tfrac{1}{2}(1-s{\eta }_{\parallel }-\kappa )$, and $\varphi =\omega t$. As a result

Equation (17)

where we have dropped the explicit time dependence from the noise parameters for brevity. We shall not attach any energetic cost to this stage of the estimation protocol as it corresponds to free dissipative evolution.

2.3. Probe readout

Before the probe is interrogated, it will need to undergo a pre-measurement stage, consisting of a sequence of three unitaries: first, each atom will be rotated by an angle ζ1 via ${{\boldsymbol{ \mathcal U }}}_{{\zeta }_{1}}^{\otimes n}$. Then, a CNOT transformation and the generalised Hadamard gate

Equation (18)

will be sequentially applied (see figure 1(c)). An energy measurement can then be performed on the probe in order to build the frequency estimate. As we shall argue in section 2.4 below, in the limit R ≪ 1 , the angles (ζ1, ζ2) may be chosen so that the statistical uncertainty of the resulting estimate is (nearly) minimal.

Let us thus obtain the probabilities associated with an energy measurement on the final state of the probe. The state after ${{\boldsymbol{ \mathcal U }}}_{{\zeta }_{1}}^{\otimes n-1}$ and the CNOT transformation reads

Equation (19)

where $\phi =\omega t+{\zeta }_{1}$, i.e. the action of ${{\boldsymbol{ \mathcal U }}}_{{\zeta }_{1}}^{\otimes n-1}$ amounts to replacing $\varphi \to \varphi +{\zeta }_{1}$ in (17).

It will be more convenient to cast ${{\boldsymbol{\rho }}}_{5}$ in an alternative form. To that end, note that ${\rm{\Lambda }}[{\boldsymbol{\varrho }}]={\alpha }_{-\epsilon }| 0\rangle \langle 0| +{\beta }_{-\epsilon }| 1\rangle \langle 1| $, whereas ${\rm{\Lambda }}{[{\boldsymbol{\varrho }}]}^{\otimes 2}$ = ${\alpha }_{-\epsilon }^{2}| 00\rangle \langle 00| $ + ${\alpha }_{-\epsilon }{\beta }_{-\epsilon }(| 01\rangle \langle 01| $ + $| 10\rangle \langle 10| )+{\beta }_{-\epsilon }^{2}| 11\rangle \langle 11| $. Generalising to an arbitrary power l yields

Equation (20)

where xl stands for the l-digit binary representation of x and h(x) denotes the number of non-zero digits in xl (i.e. its Hamming weight). In turn, ${\bar{x}}_{l}$ represents the bitwise negation of xl. Care must be taken not to confuse the scalar function $h(\cdot )$ with the single-atom Hamiltonian ${\boldsymbol{h}}$, nor the bitwise negation ${\bar{x}}_{l}$ with the map $\bar{{\boldsymbol{\varrho }}}={{\boldsymbol{\sigma }}}_{x}{\boldsymbol{\varrho }}{{\boldsymbol{\sigma }}}_{x}$.
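The Hamming-weight bookkeeping behind equation (20) is easily checked by brute force; the short test below (ours, with arbitrary values of α and β) compares the diagonal of an explicit tensor power against the closed-form weights ${\alpha }_{-\epsilon }^{\,l-h(x)}{\beta }_{-\epsilon }^{\,h(x)}$.

```python
import numpy as np
from functools import reduce

def diagonal_weights(alpha, beta, l):
    """Diagonal of (alpha |0><0| + beta |1><1|)^{tensor l}: the entry labelled
    by the l-bit string x_l is alpha^(l - h(x)) * beta^h(x), h = Hamming weight."""
    return np.array([alpha**(l - bin(x).count('1')) * beta**bin(x).count('1')
                     for x in range(2**l)])

alpha, beta, l = 0.7, 0.3, 4                                # arbitrary test values
explicit = reduce(np.kron, [np.diag([alpha, beta])] * l)    # brute-force tensor power
print(np.allclose(np.diag(explicit), diagonal_weights(alpha, beta, l)))   # True
```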

Quantities such as ${\rm{\Lambda }}{[\bar{{\boldsymbol{\varrho }}}]}^{\otimes l}$, $\overline{{\rm{\Lambda }}[{\boldsymbol{\varrho }}]}$, and $\overline{{\rm{\Lambda }}[\bar{{\boldsymbol{\varrho }}}]}$ follow from equation (20) by making the replacements $-\epsilon \to \epsilon $, ${\alpha }_{-\epsilon }\to {\beta }_{-\epsilon }$, and ${\alpha }_{-\epsilon }\to {\beta }_{\epsilon }$, respectively, while ${{\boldsymbol{\sigma }}}_{x}^{\otimes l}| {\bar{x}}_{l}\rangle =| {x}_{l}\rangle $, and

Equation (21)

Putting together all the above and dropping the sub-indices l = n − 1 in the interest of a lighter notation yields

Equation (22)

with the definitions

Equation (23)

Similarly, the final state of the protocol (i.e. ${{\boldsymbol{\rho }}}_{6}={{\boldsymbol{U}}}_{H}({\zeta }_{2})\,{{\boldsymbol{\rho }}}_{5}\,{{\boldsymbol{U}}}_{H}^{\dagger }({\zeta }_{2})$) is

Equation (24)

where

Equation (25)

Therefore, a measurement of ${{\boldsymbol{\rho }}}_{6}$ in the energy basis $\{| 0\rangle \otimes | x\rangle ,| 1\rangle \otimes | x\rangle \}$ has the following associated probabilities

Equation (26)

where all eigenvectors with the same number of 1s (i.e. h(x)) on the register yield the same probability. Equation (26) will be used below to obtain a saturable lower bound on the mean squared error of the resulting frequency estimate.

We now look into the energetic cost of the pre-measurement stage ${{\boldsymbol{ \mathcal E }}}_{{\rm{meas}}}={\boldsymbol{ \mathcal E }}({{\boldsymbol{\rho }}}_{6})-{\boldsymbol{ \mathcal E }}({{\boldsymbol{\rho }}}_{4})$. Let us re-write the system Hamiltonian in the same notation as equations (22) and (24). That is,

Equation (27)

Hence, ${\boldsymbol{ \mathcal E }}({{\boldsymbol{\rho }}}_{4})\equiv \mathrm{tr}\{{\boldsymbol{H}}{{\boldsymbol{\rho }}}_{4}\}$ reads

Equation (28)

whereas

Equation (29)

where the sub-indices m indicate the Hamming weight m = h(x) of the argument x of the corresponding coefficients, i.e. cx and fx. At our optimal prescription (ζ1, ζ2) the pre-measurement energetic cost is always positive ${{\boldsymbol{ \mathcal E }}}_{{\rm{meas}}}\gt 0$.

Note that we are deliberately leaving the projective part of the measurement out of our energetic bookkeeping. In some setups such as nuclear magnetic resonance, this could be justified, as projective measurements are mimicked by suitable rotations followed by free decay. In other cases it may be necessary to supplement ${{\boldsymbol{ \mathcal E }}}_{{\rm{meas}}}$ with a 'projection cost' ${{\boldsymbol{ \mathcal E }}}_{{\rm{proj}}}$. Similarly, depending on the specific projection model, the sharp probabilities in equation (26) might need to be modified—a 'measurement apparatus' at some finite temperature would arguably introduce thermally distributed random bit flips during the readout, thus making the measurement noisy. Neither the potential extra cost nor the errors in the interrogation would qualitatively affect our results.

While very general models of projective measurement schemes, and thermodynamic analyses thereof, may be found in the literature (see e.g. [43-49], just to mention some), it is not our intention to make generic statements about the energy efficiency of frequency estimation. Instead, we settle for showing how looking at the energetic aspect of parameter estimation in a specific example can in fact change dramatically the usual notions of metrological optimality.

2.4. 'Error bars' of the estimate

2.4.1. (Classical) Fisher information

Recall from section 1 that the mean squared error of a frequency estimate $\omega =\bar{\omega }\pm \delta \omega $ constructed from a sufficiently large number of measurements M of some generic observable ${\boldsymbol{O}}$, can be tightly lower-bounded as $\delta \omega \geqslant 1/\sqrt{M{{\mathscr{F}}}_{\omega }({\boldsymbol{O}})}$ [50], where ${{\mathscr{F}}}_{\omega }({\boldsymbol{O}})$ stands for the (classical) Fisher information. In our case, ${{\mathscr{F}}}_{\omega }({\boldsymbol{H}})$ can be readily computed from the probability distribution of an energy measurement on ${{\boldsymbol{\rho }}}_{6}$ (see equation (26)); namely as

Equation (30)

When evaluating these derivatives, one must bear in mind that $R=\tfrac{{\gamma }_{0}}{\lambda \epsilon }$ does depend on ω, as $\epsilon =\tanh \left(\tfrac{\omega }{2T}\right)$. However, in our model ${{\mathscr{F}}}_{\omega }({\boldsymbol{H}})$ may be well approximated by taking R and $\epsilon $ as constants, in the limit ≪ 1 . That is,

Equation (31)

For even n, the measurement setting $({\zeta }_{1},{\zeta }_{2})=(\tfrac{\pi }{2}-\bar{\omega }t,\tfrac{\pi }{2})$ maximises ${{\mathscr{F}}}_{\omega }({\boldsymbol{H}})$, while for odd n, one needs to choose $({\zeta }_{1},{\zeta }_{2})=(\tfrac{\pi }{2}-\bar{\omega }t,0)$. Note that $\bar{\omega }$ should not be thought of as a variable, but as the best available estimate of the atomic frequency at any given stage. As the knowledge about ω is refined, the value of $\bar{\omega }$ should be updated and the measurement setting adaptively modified. Although it may seem counter-intuitive, undoing the precession ${{\boldsymbol{ \mathcal U }}}_{\omega t}^{\otimes n}$ on all atoms after the free evolution improves the sensitivity to small fluctuations of ω around its average $\bar{\omega }$ and thus helps to reduce δω.
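As a generic numerical recipe (ours, not part of the paper), the classical Fisher information of any outcome distribution can be evaluated with a central finite difference. The sketch below deliberately uses a simple damped two-outcome toy model in place of the probabilities of equation (26), which are not reproduced here; only the machinery is meant to carry over.

```python
import numpy as np

def fisher_information(prob_fn, omega, d_omega=1e-6):
    """Classical Fisher information  F = sum_x (d p_x / d omega)^2 / p_x,
    with the derivative taken by a central finite difference."""
    p = prob_fn(omega)
    dp = (prob_fn(omega + d_omega) - prob_fn(omega - d_omega)) / (2 * d_omega)
    mask = p > 1e-12                 # skip outcomes of (numerically) zero probability
    return np.sum(dp[mask]**2 / p[mask])

def toy_probs(omega, t=1.0, gamma=0.1):
    """Toy two-outcome model (NOT equation (26)): a damped Ramsey fringe."""
    p0 = 0.5 * (1 + np.cos(omega * t) * np.exp(-gamma * t))
    return np.array([p0, 1 - p0])

# Analytic value for this toy model: V^2 t^2 sin^2(wt) / (1 - V^2 cos^2(wt)), V = e^{-gamma t}.
print(fisher_information(toy_probs, omega=1.0))
```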

2.4.2. Optimality of the measurement scheme

We now answer the question of whether another observable ${\boldsymbol{O}}\ne {\boldsymbol{H}}$ may give a better frequency estimate by comparing ${{\mathscr{F}}}_{\omega }({\boldsymbol{H}})$ with the quantum Fisher information (QFI) ${F}_{\omega }={\sup }_{{\boldsymbol{O}}}{{\mathscr{F}}}_{\omega }({\boldsymbol{O}})$ [51, 52]. This can be computed from the state ${{\boldsymbol{\rho }}}_{4}$ right after the free evolution stage or, equivalently, from ${{\boldsymbol{\rho }}}_{5}$, as Fω is invariant under unitary transformations. The QFI is [53]

Equation (32)

where ${\nu }_{x}^{\pm }$ and $| {{\rm{\Xi }}}_{x}^{\pm }\rangle $ are the eigenvalues and eigenvectors of ${{\boldsymbol{\rho }}}_{5}$. Specifically, these are

Equation (33)

where ${{\rm{\Delta }}}_{x}\equiv \sqrt{{({a}_{x}-{b}_{x})}^{2}+4{c}_{x}^{2}}$. Once again, we place ourselves in the limit of small , and find that $\langle {{\rm{\Xi }}}_{x}^{\pm }| {\partial }_{\omega }{{\boldsymbol{\rho }}}_{5}| {{\rm{\Xi }}}_{x}^{\mp }\rangle =0$, and thus

Equation (34)

which exactly coincides with the maximum of equation (31). Therefore, our proposed measurement setting is indeed optimal for ≪1 . For arbitrary , however, Fω can be significantly larger than its limiting value (34). It may even be impossible to find a pair (ζ1, ζ2) so that ${{\mathscr{F}}}_{\omega }({\boldsymbol{H}})={F}_{\omega }$. Nevertheless, the exact ${{\mathscr{F}}}_{\omega }({\boldsymbol{H}})$ always coincides with (34) at ${\zeta }_{1}=\tfrac{\pi }{2}-\bar{\omega }t$ and ${\zeta }_{2}=\{\tfrac{\pi }{2},0\}$, even when this measurement setting is sub-optimal. This point is illustrated in figure 2(a).
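For completeness, the QFI can always be computed numerically from the eigendecomposition of the state via the standard spectral formula of the type written in equation (32): diagonalise the state, take a finite-difference derivative, and sum $2| \langle i| {\partial }_{\omega }{\boldsymbol{\rho }}| j\rangle {| }^{2}/({\nu }_{i}+{\nu }_{j})$ over eigenpairs with non-vanishing ${\nu }_{i}+{\nu }_{j}$. The sketch below (ours) checks this on a simple pure-state example rather than on ${{\boldsymbol{\rho }}}_{5}$.

```python
import numpy as np

def quantum_fisher_information(rho_fn, omega, d_omega=1e-6, tol=1e-12):
    """QFI via the spectral formula  F = 2 sum_{ij} |<i| d_rho |j>|^2 / (nu_i + nu_j),
    restricted to eigenpairs with nu_i + nu_j > 0; d_rho is a central
    finite-difference derivative of rho(omega)."""
    rho = rho_fn(omega)
    d_rho = (rho_fn(omega + d_omega) - rho_fn(omega - d_omega)) / (2 * d_omega)
    nu, vecs = np.linalg.eigh(rho)
    elems = vecs.conj().T @ d_rho @ vecs            # matrix elements <i| d_rho |j>
    fisher = 0.0
    for i in range(len(nu)):
        for j in range(len(nu)):
            if nu[i] + nu[j] > tol:
                fisher += 2 * abs(elems[i, j])**2 / (nu[i] + nu[j])
    return fisher

def pure_phase_state(omega, t=2.0):
    """Test state (NOT rho_5): |psi> = (|0> + e^{-i omega t} |1>) / sqrt(2),
    whose QFI with respect to omega is exactly t^2."""
    psi = np.array([1.0, np.exp(-1j * omega * t)]) / np.sqrt(2)
    return np.outer(psi, psi.conj())

print(quantum_fisher_information(pure_phase_state, omega=1.0))    # approximately 4.0
```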


Figure 2. (a) Approximate ${{\mathscr{F}}}_{\omega }({\boldsymbol{H}})$ for small , as in equation (31), (dashed grey curve) and exact Fisher information (solid black curve), as compared with the approximate QFI of equation (34) (dashed grey line) and the exact QFI (solid black line). The angle ζ1 is set to ${\zeta }_{1}=\tfrac{\pi }{2}-\bar{\omega }t$. Note the intersection of the curves at the nearly optimal measurement setting ζ2 = 0 . (b) Optimal interrogation time ${t}_{\star }\sim {n}^{-1}$ as a function of the size of the probe n. In both plots $\omega =\bar{\omega }=1$, T  =  200, γ0 = 10−4 , λ = 5 ( = 0.2 ), and t  =  1. In (a), n  =  9.


3. Results and discussion

Recall that, in our scheme, the number of data points M that enters the inequality $\delta \omega \geqslant 1/\sqrt{M{{\mathscr{F}}}_{\omega }({\boldsymbol{H}})}$ is limited by the available energy ${\boldsymbol{ \mathcal E }}$ as $M={\boldsymbol{ \mathcal E }}/({{\boldsymbol{ \mathcal E }}}_{{\rm{init}}}+{{\boldsymbol{ \mathcal E }}}_{{\rm{meas}}})$. We can thus define the energy efficiency

${\eta }_{{\boldsymbol{ \mathcal E }}}(t,n)\equiv \tfrac{{F}_{\omega }}{{{\boldsymbol{ \mathcal E }}}_{{\rm{init}}}+{{\boldsymbol{ \mathcal E }}}_{{\rm{meas}}}}.$    (35)

Note that we use ${{\mathscr{F}}}_{\omega }({\boldsymbol{H}})$ and Fω interchangeably since, for ≪ 1 , the QFI becomes saturable with our optimal measurement prescriptions.

We will proceed to maximise ${\eta }_{{\boldsymbol{ \mathcal E }}}(t,n)$ in two steps: first, for given n, we shall find the optimal interrogation time ${t}_{\star }$. Then, we will look at the scaling of ${\eta }_{{\boldsymbol{ \mathcal E }}}({t}_{\star },n)$ with the probe size. From equations (6), (28), (29), and (34), ${t}_{\star }$ can be found numerically. As shown in figure 2(b), it has a power-law-like dependence on the probe size, $\omega {t}_{\star }\propto {n}^{-c}$, where c ≲ 1 (for ≪ 1 ).
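The two-step optimisation just described is straightforward to automate. The sketch below (entirely ours) grid-searches the interrogation time and then extracts the exponent c from a log-log fit; it uses deliberately simple toy stand-ins for the Fisher information and for the energetic cost, not the actual expressions (6), (28), (29) and (34), so only the procedure, not the numbers, is meaningful.

```python
import numpy as np

def optimal_time(fisher_fn, cost_fn, n, t_grid):
    """Grid search for the interrogation time maximising eta(t, n) = F(t, n) / cost(t, n)."""
    eta = np.array([fisher_fn(t, n) / cost_fn(t, n) for t in t_grid])
    return t_grid[int(np.argmax(eta))]

# Toy stand-ins (NOT the paper's F_omega or energy costs): a GHZ-like Fisher
# information with collective dephasing, and a per-round cost linear in n.
gamma = 0.1
fisher_toy = lambda t, n: n**2 * t**2 * np.exp(-2 * gamma * n * t)
cost_toy = lambda t, n: 0.05 * n

t_grid = np.linspace(1e-3, 50.0, 20000)
sizes = np.arange(2, 21)
t_star = [optimal_time(fisher_toy, cost_toy, n, t_grid) for n in sizes]

# Exponent c in t_star ~ n^(-c) from a log-log linear fit (close to 1 for this toy model).
c = -np.polyfit(np.log(sizes), np.log(t_star), 1)[0]
print(round(c, 2))
```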

Let us place ourselves in the standard scenario, in which the total time ${\boldsymbol{ \mathcal T }}$ is the scarce resource to 'economise' on. As usual, we shall work in the limit ≪ 1 and denote the corresponding optimal sampling time by ${t}_{\star }^{\prime} $. In figure 3(a) we illustrate that ${\eta }_{{\boldsymbol{ \mathcal T }}}({t}_{\star }^{\prime} ,n)$ can scale super-extensively under our time-inhomogeneous dissipative dynamics, even if we start from (mixed) thermal probes. Specifically, we recover the Zeno scaling ${(\delta \omega )}^{2}\sim 1/{n}^{3/2}$ [18, 19].


Figure 3. (a) Efficiency ${\eta }_{{\boldsymbol{ \mathcal T }}}(t{{\prime} }_{\star },n)={F}_{\omega }/t{{\prime} }_{\star }$ at the optimal interrogation time $t{{\prime} }_{\star }$ as a function of the probe size n, in the standard frequency estimation scenario of limited time ${\boldsymbol{ \mathcal T }}$. Note from the inset that, in spite of the fact that the probe is prepared in a mixed GHZ-diagonal state, the efficiency grows super-extensively, as $\tilde{\eta }({t}_{\star },n)\sim {n}^{3/2}$, which corresponds to Zeno scaling. (b) Energy efficiency ${\eta }_{{\boldsymbol{ \mathcal E }}}({t}_{\star },n)={F}_{\omega }/({{\boldsymbol{ \mathcal E }}}_{{\rm{init}}}+{{\boldsymbol{ \mathcal E }}}_{{\rm{meas}}})$ at the optimal interrogation time ${t}_{\star }$ as a function of the probe size n for the same parameters as (a). In this case, one roughly has ${\eta }_{{\boldsymbol{ \mathcal E }}}({t}_{\star },n)\sim {n}^{-1/3}$, i.e. from an energetic perspective, using large entangled probes yields no metrological advantage. In (c), we set n  =  2 and investigate how ${\eta }_{{\boldsymbol{ \mathcal E }}}$ at ${t}_{\star }$ decays as λ grows; that is, in our model, longer memory times yield more energy-efficient frequency estimation than purely Markovian dissipation. All parameters are the same as in figure 2.


What figure 3(a) suggests is that, if a large number N of two-level atoms were available, it would be sensible to batch them together in an entangled GHZ-diagonal state and partition the available running time ${\boldsymbol{ \mathcal T }}$ into prepare-and-measure segments of length $t{{\prime} }_{\star }$—the larger the probe, the better the resulting estimate.

In contrast, figure 3(b) tells a completely different story: when adopting an entangled GHZ-diagonal preparation, the efficiency ${\eta }_{{\boldsymbol{ \mathcal E }}}({t}_{\star },n)$ decreases rapidly as the probe is scaled up in size (in this case ${\eta }_{{\boldsymbol{ \mathcal E }}}({t}_{\star },n)\sim {n}^{-1/3}$, although the exponent is non-universal). This is so because, while $({{\boldsymbol{ \mathcal E }}}_{{\rm{init}}}+{{\boldsymbol{ \mathcal E }}}_{{\rm{meas}}})\sim n$, the QFI exhibits a slower power-law-like growth. Hence, if there was a cap on the total available energy ${\boldsymbol{ \mathcal E }}$, one could produce a more accurate frequency estimate by manipulating the uncorrelated atoms locally rather than attempting to build such an 'expensive' entangled state. Our numerics show that this qualitative behaviour persists even if we move away from the regime of ≪ 1 and search for the measurement setting (ζ1, ζ2) and interrogation time ${t}_{\star }$ which jointly maximise ${\eta }_{{\boldsymbol{ \mathcal E }}}(t,{\zeta }_{1},{\zeta }_{2},n)={{\mathscr{F}}}_{\omega }({\boldsymbol{H}})/[{{\boldsymbol{ \mathcal E }}}_{{\rm{init}}}+{{\boldsymbol{ \mathcal E }}}_{{\rm{meas}}}({\zeta }_{1},{\zeta }_{2})]$.

Another natural question to ask in this setting is whether the environmental memory time plays any role in the energy efficiency of frequency estimation. In figure 3(c) we illustrate how ${\eta }_{{\boldsymbol{ \mathcal E }}}({t}_{\star },n)$ decays with λ at any given n. Recall from equation (7) that increasing λ corresponds to reducing the bath memory time, thus making the dissipation 'more Markovian'. Our setting thus showcases how memory effects in the dissipative dynamics can improve the performance of a specific parameter estimation task. Elucidating whether memory effects play an instrumental role in energy-efficient frequency estimation requires a more general analysis that we defer for future work.

4. Conclusions

We have studied the problem of noisy frequency estimation when the total available energy ${\boldsymbol{ \mathcal E }}$ is limited. In each round of our estimation protocol, an ensemble of n initially thermal two-level atoms is brought into a GHZ-diagonal form by means of a simple sequence of qubit gates. We quantified the energetic cost of the preparation stage ${{\boldsymbol{ \mathcal E }}}_{{\rm{init}}}$ by looking at the ensuing increase in the average energy of the probe.

The system is then allowed to evolve freely under the effect of environmental noise. This is modelled by a phenomenological master equation with built-in memory effects, which gives rise to phase-covariant free dissipative dynamics.

After further qubit operations, an energy measurement is eventually performed on the probe. We showed that, in a suitable range of parameters, these operations can be chosen so as to globally minimise the statistical uncertainty of the final frequency estimate. We also provided the corresponding optimal measurement prescription explicitly. The cost associated with the (pre-)measurement stage ${{\boldsymbol{ \mathcal E }}}_{{\rm{meas}}}$ can also be readily calculated from the change in the average energy of the probe, thus allowing for a comprehensive energetic bookkeeping in each round of the protocol.

We introduced the notion of energy efficiency of the estimation ${\eta }_{{\boldsymbol{ \mathcal E }}}={{\mathscr{F}}}_{\omega }({\boldsymbol{H}})/({{\boldsymbol{ \mathcal E }}}_{{\rm{init}}}+{{\boldsymbol{ \mathcal E }}}_{{\rm{meas}}})$ as a means to assess the overall performance of the estimation protocol when there is a cap on the total energy ${\boldsymbol{ \mathcal E }}$. We further found the optimal free evolution time ${t}_{\star }$ maximising ${\eta }_{{\boldsymbol{ \mathcal E }}}({t}_{\star },n)$, and noticed that preparing larger probes in entangled GHZ-diagonal states is always detrimental for the energy efficiency of frequency estimation.

In the standard scenario, one assumes that the most restrictive constraint is instead the limited running time ${\boldsymbol{ \mathcal T }}$ of the estimation protocol and resorts to the figure of merit ${\eta }_{{\boldsymbol{ \mathcal T }}}={{\mathscr{F}}}_{\omega }({\boldsymbol{H}})/t$. This grows monotonically with n when optimised over the free evolution time of the probe, thus suggesting that large multipartite entangled probes are, in principle, better. This is so because a figure of merit like ${\eta }_{{\boldsymbol{ \mathcal T }}}$ fails to capture how 'difficult' or 'costly' it may be to prepare those states in practice. Incorporating the energetic dimension into the performance assessment through our ${\eta }_{{\boldsymbol{ \mathcal E }}}$ may be the simplest way to quantitatively account for this 'difficulty'.

It is true that tracking the average energy changes of the probe may be a crude way of capturing the actual limitations in force in real metrological setups. Likewise, in many situations, the total time ${\boldsymbol{ \mathcal T }}$ might indeed place the most stringent limitation on the achievable precision, thus rendering other considerations irrelevant. Our observation merely highlights the importance of formulating quantifiers of the metrological efficiency that faithfully capture all the relevant constraints in place in each specific scenario.

We also showed that, at any probe size, ${\eta }_{{\boldsymbol{ \mathcal E }}}({t}_{\star },n)$ decays monotonically with the inverse bath memory time λ, hence suggesting that large bath correlation times might be a resource for energy-efficient frequency estimation. This point certainly deserves a deeper and more general investigation.

Our intended take-home message is that different assessments of resources lead to different notions of optimality. Hence, in order to produce practically useful metrological bounds, the stress should be placed on searching for those figures of merit capable of capturing the most stringent limitations at work in each experimental setup.

To conclude, it is important to remark that we did not optimise our energy efficiency over the initial state of the probe but rather, adopted the GHZ-diagonal preparation as a working assumption. The question of whether or not other forms of multipartite sharing of correlations could give rise to a more energetically favourable scaling remains open and certainly deserves further investigation.

Acknowledgments

We are thankful to A del Campo, K V Hovhannisyan, J Kołodyński, R Kosloff, K Macieszczak, M Mehboudi, J Oppenheim, R Nichols, N A Rodriguez-Briones, A Smirne, T Tufarelli, and R Uzdin for helpful comments. We gratefully acknowledge funding from the Royal Society under the International Exchanges Programme (Grant No. IE150570), the European Research Council under the StG GQCOP (Grant No. 637352), the Foundational Questions Institute (fqxi.org) under the Physics of the Observer Programme (Grant No. FQXi-RFP-1601), and the COST Action MP1209: 'Thermodynamics in the quantum regime'.

Footnotes

  • Further improvements may follow from setting up interactions within the probe [4-9], although such a scenario will not be considered in this paper.
