Abstract
The problem of estimating the frequency of a two-level atom in a noisy environment is studied. Our interest is to minimise both the energetic cost of the protocol and the statistical uncertainty of the estimate. In particular, we prepare a probe in a 'GHZ-diagonal' state by means of a sequence of qubit gates applied on an ensemble of n atoms in thermal equilibrium. Noise is introduced via a phenomenological time-non-local quantum master equation, which gives rise to a phase-covariant dissipative dynamics. After an interval of free evolution, the n-atom probe is globally measured at an interrogation time chosen to minimise the error bars of the final estimate. We model explicitly a measurement scheme which becomes optimal in a suitable parameter range, and are thus able to calculate the total energetic expenditure of the protocol. Interestingly, we observe that scaling up our multipartite entangled probes offers no precision enhancement when the total available energy is limited. This is in stark contrast with standard frequency estimation, where larger probes—more sensitive but also more 'expensive' to prepare—are always preferred. Replacing time with whichever resource places the most stringent limitation on each specific experimental setup would thus help to formulate more realistic metrological prescriptions.
Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
1. Introduction
While (classical) metrology is concerned with producing the most accurate estimate of some relevant parameter, quantum metrology is aimed at exploiting genuinely quantum traits to go beyond classical metrological limits [1–3]. Classically, there would be no difference between running some estimation protocol sequentially N times on one probe, and running the same protocol simultaneously on n (uncorrelated) copies of that probe for M = N/n rounds. Quantum-mechanically, however, such an n-partite probe can be prepared in an entangled state, so that its estimation efficiency grows super-extensively. Here 'super-extensive' stands for faster-than-linear growth in the probe size, and the 'estimation efficiency' is proportional to the inverse of the mean squared error.
More precisely, under rather weak conditions, the statistical uncertainty of the estimate of some parameter may be tightly lower-bounded as (δω)² ≥ 1/(M Fω) [10, 11], where Fω denotes the Fisher information of a sufficiently large number M of measurements of the chosen observable on the n-partite probe. Importantly—although often disregarded—the length M of the dataset used to build the estimate will always be capped by the limited availability of some essential resource: if r is the amount of resource consumed per round, then M scales as the total resource budget divided by r, and hence the achievable mean squared error is inversely proportional to the estimation efficiency Fω/r. A scaling of this efficiency such as n^c, with c > 1, would be the hallmark of quantum-enhanced sensing.
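This bookkeeping can be made concrete with a toy numerical sketch (the function name and all numbers below are ours, purely illustrative): the resource-limited Cramér–Rao bound (δω)² ≥ 1/(M F), with the number of rounds M capped by a total budget at a fixed cost per round.

```python
import numpy as np

# Toy rendering of the resource-limited Cramer-Rao bound: the mean
# squared error obeys (delta omega)^2 >= 1/(M * F), where the number of
# rounds M is capped by a total resource budget at cost r per round.
def min_mse(fisher_per_round, budget, cost_per_round):
    M = budget / cost_per_round          # M = (total resource) / r
    return 1.0 / (M * fisher_per_round)

# Halving the per-round cost doubles M and halves the achievable error.
assert np.isclose(min_mse(2.0, 100.0, 1.0), 0.005)
assert np.isclose(min_mse(2.0, 100.0, 0.5), 0.0025)
```

The point of the example is only that, at fixed Fisher information per round, the attainable precision is set by how many rounds the scarce resource can pay for.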
Although the unavoidable effects of environmental noise often cancel out any quantum advantage [12–16], a super-extensive growth of the efficiency may still be attained under time-inhomogeneous phase-covariant noise [17–20], and even more generic Ohmic dissipation [21], noise with a particular geometry [22, 23], or setups involving quantum error correction [24–26].
For instance, when it comes to frequency estimation, the total running time is usually regarded as the resource to be optimally partitioned [12]. Note that, even if features such as the amount of entanglement, coherence [27], or squeezing [28] in the initial state of the probe, or the internal interaction range among its constituents [4–7, 9], could all be regarded as legitimate metrological resources, these do not fit in our framework. That is, even if, e.g., the amount of entanglement in the preparation of an n-partite probe were severely limited in practice, this would not cap the number of rounds M of the estimation protocol—a fresh copy of the same entangled state would be supplied at the start of every iteration until either the time, the overall number of probe constituents, or the available energy has been fully consumed.
In our case, we shall look precisely at the total energy consumed, and show that the notion of optimality that follows from the maximisation of an energy efficiency differs fundamentally from the one based solely on the portioning of the available time. In particular, while the maximisation of a time efficiency encourages the use of multipartite entangled probes with n as large as possible, energetic considerations advise against it—the high costs associated with the creation and manipulation of large multipartite correlated states do not pay off from the metrological viewpoint. In this way, we put into qualitative terms the intuitive notion that multi-particle entanglement-enabled metrology may not always be practical [29].
In particular, as illustrated in figure 1, we consider an ensemble of n initially thermal two-level atoms that are brought, through a sequence of qubit gates, into a sensitive GHZ-diagonal state [30] (see section 2.1). Such an entangled probe is left to evolve freely under the action of time-non-local covariant noise. Specifically, we resort to a phenomenological quantum master equation [31–33] which explicitly accounts for memory effects and gives rise to a non-divisible dissipative dynamics [33] (see section 2.2 for full details). We then devise a measurement protocol consisting of a sequence of qubit gates followed by an energy measurement (see section 2.3). We further provide the specific measurement setting for which this scheme becomes optimal for frequency estimation in a suitable parameter range (see section 2.4). By looking at the changes in the average energy of the probe during the preparation and measurement stages, we explicitly obtain the total energetic cost per round. We find that adjusting the free evolution time so as to maximise the time efficiency of the protocol does lead to a super-extensive scaling in the probe size; specifically, the n^{3/2} or 'Zeno' scaling [18, 19]. In contrast, the energy efficiency of the very same probe decays monotonically with n, even when the time is chosen to maximise it (see section 3).
Interestingly, note that the observed super-extensive growth of the time efficiency is attained while starting from thermal qubits that are prepared into a GHZ-diagonal state. In an accompanying article [34], the same super-extensive growth of the time efficiency is found for an arbitrary set of qubits prepared in a GHZ-diagonal state for frequency estimation in a noisy environment. GHZ-diagonal states had been conjectured to be optimal for phase estimation with mixed probes in the absence of noise [30]. Here, we show that they lead to optimal scaling even in a noisy scenario. We also observe that, in our setting, memoryless 'Markovian' dissipative dynamics generally produces less efficient estimates, thus suggesting that memory effects might be beneficial for the energy efficiency of parameter estimation (see section 3).
2. Methods
2.1. Probe initialisation
The system of interest is an ensemble of n non-interacting two-level atoms thermalised at temperature T, whose frequency ω needs to be estimated. For notational simplicity we shall set ℏ and the Boltzmann constant kB to 1 in what follows. Each atom has a Hamiltonian and is initially in the state
where R denotes the polarisation bias of the thermal state and σz the z Pauli matrix. The global Hamiltonian is the sum of the single-atom terms, and the total initial state is simply
where we have labelled the first atom as c for 'control qubit' while the rest are tagged r, for 'register'.
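A minimal numerical sketch of a single thermal atom may be helpful here (our assumed convention: single-atom Hamiltonian h = (ω/2)σz, so that the polarisation bias is R = tanh(ω/2T); ℏ = kB = 1 as in the text):

```python
import numpy as np

# Sketch of one thermal atom, assuming h = (omega/2) sigma_z, so the
# polarisation bias is R = tanh(omega / (2 T)).
def thermal_qubit(omega, T):
    R = np.tanh(omega / (2 * T))          # polarisation bias
    sz = np.diag([1.0, -1.0])
    return 0.5 * (np.eye(2) - R * sz)     # ground state more populated

rho = thermal_qubit(omega=1.0, T=0.5)
# Cross-check against the Boltzmann weights exp(-E/T)/Z directly.
w = np.exp(-np.array([0.5, -0.5]) / 0.5)
assert np.allclose(np.diag(rho), w / w.sum())
```

The n-atom initial state is then simply the tensor product of n such single-atom density matrices.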
We shall prepare our n-atom probe in a GHZ-diagonal state by means of a CNOT transformation, followed by a Hadamard gate and a further CNOT (see figure 1(a)) [30]. That is, we first apply the unitary on . Introducing the shorthand notation, this yields
Then, the Hadamard transformation acts solely on the control qubit:
and finally, the second CNOT transformation leads to
where the missing elements are just Hermitian conjugates of the opposite corners of each matrix. The resulting state will subsequently undergo dissipative evolution (see section 2.2) before being interrogated.
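The preparation circuit is easy to check numerically for the smallest non-trivial case. The sketch below (n = 2, with an illustrative bias R and our own gate ordering CNOT, Hadamard on the control, CNOT) verifies that a product of thermal qubits is mapped onto a GHZ-diagonal ('X-shaped') state, i.e. one supported only on the diagonal and anti-diagonal:

```python
import numpy as np

def kron(*ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

R = 0.3                                    # illustrative polarisation bias
rho1 = np.diag([(1 - R) / 2, (1 + R) / 2])
rho = kron(rho1, rho1)                     # control (c) x register (r)

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U = CNOT @ kron(H, np.eye(2)) @ CNOT       # second CNOT . (H x 1) . first CNOT

rho_ghz = U @ rho @ U.conj().T
# GHZ-diagonal ('X') structure: support only on the (anti-)diagonal.
mask = np.eye(4, dtype=bool) | np.fliplr(np.eye(4, dtype=bool))
assert np.allclose(rho_ghz[~mask], 0.0)
```

Since the circuit is unitary, the preparation reshuffles populations into coherences without changing the spectrum of the state.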
As we will see in section 2.2, our model of dissipation gives rise to phase-covariant dynamics. It is known that the mean squared error of frequency estimates under this type of noise can be tightly lower-bounded below the standard quantum limit [19, 20]. It was further shown that this bound is asymptotically saturable by using (pure) GHZ input states. On the other hand, (mixed) GHZ-diagonal states such as ours were found to perform well—and conjectured to be optimal—in noiseless phase estimation with mixed probes [30]. In section 3 we will illustrate that the optimal 'Zeno scaling', introduced in [18, 19], can also be attained with such GHZ-diagonal states.
Even though in the present paper we will limit ourselves to GHZ-diagonal preparations, it seems interesting to compare the size scaling of the metrological performance of different preparations. One would certainly find that some preparations may allow for a more energy-efficient estimation than others at fixed probe size. Unfortunately, as we will see below, our calculations rely heavily on the simple analytical structure of GHZ-diagonal states undergoing phase-covariant dissipation. This makes it difficult to extrapolate our results to other initial states.
Finally, note that the energetic cost of this initialisation stage is linear in the probe size and evaluates to
At this point, one may wonder why we do not cool the probes down to the ground state before starting the estimation protocol, so as to work with pure rather than mixed states. This could certainly be done (e.g. by coherent feedback cooling), so long as the corresponding energy cost is added to the total energetic bookkeeping—just like (6), it would scale linearly in n. Such a cooling stage is anyway not essential, and we will keep it out of the picture in what follows, thus avoiding the need to model it explicitly.
2.2. Free evolution
2.2.1. Phenomenological master equation
In order to account for the environmental effects in our probe, we will assume that each atom evolves according to a time-non-local master equation (see figure 1(b)) with a phenomenological exponentially decaying memory kernel [31]. The reason for this choice is that the resulting dissipative dynamics is phase-covariant, as opposed to the one following from a more canonical setting, such as the spin-boson model [21, 35]. This will eventually allow us to establish a connection with known results in the literature [20]. Moreover, due to its simplicity, the model considered here can be solved exactly.
Specifically, we shall think of a generic scenario in which a two-level atom with Hamiltonian interacts with a bath () through the interaction term . In the interaction picture with respect to the free Hamiltonian (indicated with subindex I in what follows), our phenomenological equation would read
with and where denotes the Gorini–Kossakowski–Lindblad–Sudarshan (Markovian) generator [36, 37]
Here {·,·} stands for the anti-commutator, and the decay rates are Γω ≡ γ0[1 + (e^{ω/T} − 1)^{−1}] and Γ−ω = e^{−ω/T} Γω. Equation (7) comes with the advantage of explicitly introducing memory effects into the dynamics. Note, however, that one must be careful when dealing with master equations that lack a microscopic derivation [38–40], as they often lead to unphysical results. In particular, equation (7) breaks positivity in a known parameter regime [32]. Importantly, the thermal state is the stationary point of equation (7), which is, in turn, consistent with our choice of initial state in section 2.1.
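The fact that the thermal state is stationary follows from detailed balance between the two decay rates, which is quick to check numerically (parameter values below are illustrative):

```python
import numpy as np

# Detailed-balance check for Gamma_omega = gamma0 (1 + n_bar) and
# Gamma_-omega = exp(-omega/T) Gamma_omega: the thermal populations are
# stationary under emission/absorption at these rates.
gamma0, omega, T = 1.0, 2.0, 0.8
n_bar = 1.0 / (np.exp(omega / T) - 1.0)     # thermal occupation number
G_down = gamma0 * (1.0 + n_bar)             # Gamma_omega: emission
G_up = np.exp(-omega / T) * G_down          # Gamma_-omega: absorption

p_e = 1.0 / (1.0 + np.exp(omega / T))       # excited-state population
p_g = 1.0 - p_e                             # ground-state population
assert np.isclose(p_g * G_up, p_e * G_down) # detailed balance holds
```

At T → 0 the absorption rate vanishes and only spontaneous emission at rate γ0 survives, consistently with the zero-temperature limit discussed below.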
At this point, one may still wonder why not choose an arguably more realistic non-covariant noise model derived from first principles, as in [21]. It must be noted that—unlike in [21]—we need to know the explicit form of the time-evolved state for arbitrarily large probes. This is a prerequisite for gauging the energy cost of the measurement stage and, eventually, for assessing the asymptotic scaling of the overall estimation efficiency. A noise model lacking the 'niceties' of covariant channels not only compromises our ability to analytically evolve the state of the probe, but is also likely to render our proposed measurement scheme sub-optimal. On the plus side, however, covariant dissipation follows quite naturally from generic noise models whenever the ubiquitous rotating-wave approximation is well justified [21, 35]. Furthermore, as can be seen by comparing [20] with [34] and our results below, the details of the specific covariant dissipation model do not seem to affect the qualitative asymptotic features of the estimation protocol.
2.2.2. Connection to the damped Jaynes–Cummings model
The seemingly arbitrary choice of memory kernel in equation (7) may be justified by considering the damped Jaynes–Cummings model on resonance; that is, a two-level atom in an empty and leaky cavity. This setup can be effectively described by the Hamiltonian
where and the system-bath coupling constants gμ make up the Lorentzian spectral density [31, 35].
Assuming weak coupling, the use of a second-order Nakajima–Zwanzig master equation [35, 41, 42] is justified. This reads
where the interaction picture Hamiltonian is , with and . The state of the environment and the trace over its degrees of freedom are denoted by and , respectively.
Combining equations (9) and (10) one arrives at a master equation with the same structure as (7) at zero temperature [35], in which the bath correlation function plays the role of the memory kernel. In spite of this remark, we emphasise that (7) remains a purely phenomenological equation, as the decay rates Γω are evaluated at arbitrary temperature T.
2.2.3. Dissipative dynamics as a phase-covariant channel
Alternatively, (7) can be brought into the Schrödinger picture and cast in the equivalent time-local form
For the sake of completeness, we include here the time-dependent decay rates γ± (t) and γz(t) , derived in [33]
where and .
As argued in [20], the dissipative dynamics following from equations such as (11) can be cast as a phase-covariant qubit channel , i.e. a map such that , where and ' ◦ ' stands for channel composition. These maps can be parametrised as
where the matrix acts on to yield , so that .
For the ensuing dynamics to be completely positive, one must have and . Additionally, since the map describes the action of the environment, it should asymptotically bring the two-level atom back to thermal equilibrium. This entails .
Following [20] one readily finds that equation (7) corresponds to
where α∈{∥, ⊥ } , , and .
2.2.4. State of the probe after the noisy evolution
Having discussed the details of the noise model, let us explicitly write the time-evolved state after the action of the channel of equations (14) and (15). Its application to a generic qubit state yields
with , , and φ ≡ ω t . As a result
where we have dropped the explicit time dependence from the noise parameters for brevity. We shall not attach any energetic cost to this stage of the estimation protocol as it corresponds to free dissipative evolution.
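In Bloch-vector language, a channel of this kind contracts and rotates the equatorial components while contracting and displacing the longitudinal one. A schematic numerical rendering (the parameter values are illustrative, not those following from equations (14) and (15)) also makes the phase covariance explicit, namely that the channel commutes with rotations about the z axis:

```python
import numpy as np

# Schematic Bloch-vector action of a phase-covariant qubit channel:
# transverse components shrink by eta_perp and rotate by phi, while the
# longitudinal one shrinks by eta_par and is displaced by kappa.
def phase_covariant(bloch, eta_perp, eta_par, kappa, phi):
    x, y, z = bloch
    c, s = np.cos(phi), np.sin(phi)
    return np.array([eta_perp * (c * x - s * y),
                     eta_perp * (s * x + c * y),
                     eta_par * z + kappa])

def rot_z(bloch, theta):
    x, y, z = bloch
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * x - s * y, s * x + c * y, z])

# Phase covariance: the channel commutes with z rotations.
v = np.array([0.6, -0.2, 0.3])
args = (0.8, 0.7, 0.1, 0.4)       # eta_perp, eta_par, kappa, phi (toy values)
assert np.allclose(phase_covariant(rot_z(v, 0.9), *args),
                   rot_z(phase_covariant(v, 0.9 * 0 + 0.6 * 0 + 1 * 0 + v[2] * 0 + 0 or v, *args)[:3] if False else phase_covariant(v, *args), 0.9))
```

The displacement κ encodes the fact that the channel drives the atom towards its (non-maximally-mixed) thermal fixed point rather than towards the centre of the Bloch ball.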
2.3. Probe readout
Before the probe is interrogated, it will need to undergo a pre-measurement stage, consisting of a sequence of three unitaries: first, each atom will be rotated by an angle ζ1 via . Then, a CNOT transformation and the generalised Hadamard gate
will be sequentially applied (see figure 1(c)). An energy measurement can then be performed on the probe in order to build the frequency estimate. As we shall argue in section 2.4 below, in the limit R ≪ 1 , the angles (ζ1, ζ2) may be chosen so that the statistical uncertainty of the resulting estimate is (nearly) minimal.
Let us thus obtain the probabilities associated with an energy measurement on the final state of the probe. The state after and the CNOT transformation reads
where ϕ ≡ ω t + ζ1 , i.e. the action of amounts to replacing in (17).
It will be more convenient to cast the state in an alternative form. To that end, one expands the relevant operator products explicitly; generalising to an arbitrary power l yields
where xl stands for the l-digit binary representation of x and h(x) denotes the number of non-zero digits in xl (i.e. its Hamming weight). In turn, represents the bitwise negation of xl. Care must be taken not to confuse the scalar function with the single-atom Hamiltonian , nor the bitwise negation with the map .
Quantities such as , , and follow from equation (20) by making the replacements , , and , respectively, while , and
Putting together all the above and dropping the sub-indices l = n − 1 in the interest of a lighter notation yields
with the definitions
Similarly, the final state of the protocol (i.e. ) is
where
Therefore, a measurement of in the energy basis has the following associated probabilities
where all eigenvectors with the same number of 1s (i.e. h(x)) on the register yield the same probability. Equation (26) will be used below to obtain a saturable lower bound on the mean squared error of the resulting frequency estimate.
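The grouping of outcomes by Hamming weight is easy to verify combinatorially: among the 2^(n−1) register bitstrings x, each weight m = h(x) occurs with binomial multiplicity, so only a handful of distinct probability values appear. A toy check (n = 5, labels ours):

```python
from collections import Counter
from math import comb

# The 2^(n-1) register bitstrings x group by Hamming weight h(x), each
# weight m occurring with multiplicity C(n-1, m).
n = 5
counts = Counter(bin(x).count('1') for x in range(2 ** (n - 1)))
assert all(counts[m] == comb(n - 1, m) for m in range(n))
assert sum(counts.values()) == 2 ** (n - 1)
```

This degeneracy is what keeps the measurement statistics tractable for arbitrarily large probes.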
We now look into the energetic cost of the pre-measurement stage . Let us re-write the system Hamiltonian in the same notation as equations (22) and (24). That is,
Hence, it can be written as
whereas
where the sub-indices m indicate the Hamming weight m = h(x) of the argument x of the corresponding coefficients, i.e. cx and fx. At our optimal prescription (ζ1, ζ2), the pre-measurement energetic cost is always positive.
Note that we are deliberately leaving the projective part of the measurement out of our energetic bookkeeping. In some setups such as nuclear magnetic resonance, this could be justified, as projective measurements are mimicked by suitable rotations followed by free decay. In other cases it may be necessary to supplement with a 'projection cost' . Similarly, depending on the specific projection model, the sharp probabilities in equation (26) might need to be modified—a 'measurement apparatus' at some finite temperature would arguably introduce thermally distributed random bit flips during the readout, thus making the measurement noisy. Neither the potential extra cost nor the errors in the interrogation would qualitatively affect our results.
While very general models of projective measurement schemes, and thermodynamic analyses thereof, may be found in the literature (see e.g. [43–49], just to mention some), it is not our intention to make generic statements about the energy efficiency of frequency estimation. Instead, we settle for showing how looking at the energetic aspect of parameter estimation in a specific example can in fact change dramatically the usual notions of metrological optimality.
2.4. 'Error bars' of the estimate
2.4.1. (Classical) Fisher information
Recall from section 1 that the mean squared error of a frequency estimate constructed from a sufficiently large number of measurements M of some generic observable can be tightly lower-bounded as (δω)² ≥ 1/(M Fω) [50], where Fω stands for the (classical) Fisher information. In our case, Fω can be readily computed from the probability distribution of an energy measurement on the final state of the probe (see equation (26)); namely as
When evaluating these derivatives, one must bear in mind that does depend on ω, as . However, in our model may be well approximated by taking R and as constants, in the limit Rλ ≪ 1 . That is,
For even n, the measurement setting maximises , while for odd n, one needs to choose . Note that should not be thought of as a variable, but as the best available estimate of the atomic frequency at any given stage. As the knowledge about ω is refined, the value of should be updated, and the measurement setting adaptively modified. Although it may seem counter-intuitive, undoing the precession on all atoms after the free evolution improves the sensitivity to small fluctuations of ω around its average and thus helps to reduce δω.
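The classical Fisher information itself can always be evaluated numerically from the outcome probabilities. The sketch below uses a generic central finite difference; the two-outcome cosine fringe is a toy stand-in for the actual probabilities of equation (26), not the paper's model:

```python
import numpy as np

# Finite-difference estimate of F = sum_x (d p_x / d omega)^2 / p_x.
def fisher_information(prob_fn, omega, eps=1e-6):
    p = prob_fn(omega)
    dp = (prob_fn(omega + eps) - prob_fn(omega - eps)) / (2 * eps)
    return float(np.sum(dp ** 2 / p))

def toy_probs(omega, t=1.0):
    p0 = 0.5 * (1.0 + np.cos(omega * t))   # Ramsey-like fringe
    return np.array([p0, 1.0 - p0])

# For this fringe model F = t^2, independently of omega.
assert np.isclose(fisher_information(toy_probs, omega=1.0), 1.0, atol=1e-4)
```

In an adaptive protocol one would re-evaluate such an expression around the current best frequency estimate at every round.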
2.4.2. Optimality of the measurement scheme
We now answer the question of whether another observable may give a better frequency estimate by comparing with the quantum Fisher information (QFI) [51, 52]. This can be computed from the state right after the free evolution stage or, equivalently, from , as Fω is invariant under unitary transformations. The QFI is [53]
where and are the eigenvalues and eigenvectors of . Specifically, these are
where . Once again, we place ourselves in the limit of small Rλ , and find that , and thus
which exactly coincides with the maximum of equation (31). Therefore, our proposed measurement setting is indeed optimal for Rλ≪1 . For arbitrary Rλ , however, Fω can be significantly larger than its limiting value (34). It may even be impossible to find a pair (ζ1, ζ2) so that . Nevertheless, the exact always coincides with (34) at and , even when this measurement setting is sub-optimal. This point is illustrated in figure 2(a).
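The quantum Fisher information of equation (32) can likewise be evaluated numerically from the spectral decomposition of the state. The snippet below implements the standard formula F = Σ_{i,j} 2|⟨i|∂ωρ|j⟩|²/(λi + λj), with the derivative taken by finite differences; the pure-state example is a toy consistency check (parameter values are ours):

```python
import numpy as np

# QFI from the eigendecomposition of rho(omega), skipping vanishing
# eigenvalue pairs, with d(rho)/d(omega) by central differences.
def qfi(rho_fn, omega, eps=1e-6, tol=1e-12):
    drho = (rho_fn(omega + eps) - rho_fn(omega - eps)) / (2 * eps)
    lam, vec = np.linalg.eigh(rho_fn(omega))
    F = 0.0
    for i in range(len(lam)):
        for j in range(len(lam)):
            if lam[i] + lam[j] > tol:
                amp = vec[:, i].conj() @ drho @ vec[:, j]
                F += 2.0 * abs(amp) ** 2 / (lam[i] + lam[j])
    return F

def pure_phase(omega, t=1.0):
    psi = np.array([1.0, np.exp(1j * omega * t)]) / np.sqrt(2.0)
    return np.outer(psi, psi.conj())

# A pure equatorial qubit acquiring phase omega*t has QFI equal to t^2.
assert np.isclose(qfi(pure_phase, omega=0.7), 1.0, atol=1e-4)
```

Comparing such a numerical QFI with the classical Fisher information of a candidate measurement is precisely the optimality test carried out in this subsection.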
3. Results and discussion
Recall that, in our scheme, the number of data points M that enters the inequality is limited by the available energy as . We can thus define the energy efficiency
Note that we use and Fω interchangeably since, for Rλ ≪ 1 , the QFI is saturable with our optimal measurement prescription.
We will proceed to maximise in two steps: first, for given n, we shall find the optimal interrogation time . Then, we will look at the scaling of with the probe size. From equations (6), (28), (29), and (34), can be found numerically. As shown in figure 2(b) it has a power-law-like dependence on the probe size , where c ≲ 1 (for Rλ ≪ 1 ).
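The first of these two optimisation steps can be illustrated with a toy grid search. The dephased fringe F(t) = t² e^{−2γt} and the constant per-round energy cost below are illustrative stand-ins, not the quantities computed in the paper:

```python
import numpy as np

# Toy search for the interrogation time maximising F(t) / (energy cost).
def efficiency(t, gamma=1.0, energy_per_round=1.0):
    return t ** 2 * np.exp(-2.0 * gamma * t) / energy_per_round

ts = np.linspace(1e-3, 5.0, 2001)
t_opt = ts[np.argmax(efficiency(ts))]
# Analytically, t^2 exp(-2 gamma t) peaks at t = 1/gamma.
assert np.isclose(t_opt, 1.0, atol=5e-3)
```

In the paper the analogous maximisation is carried out over the exact Fisher information and the full per-round energy of equations (6), (28) and (29), and then repeated for each probe size n.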
Let us place ourselves in the standard scenario, in which the total time is the scarce resource to 'economise' on. As usual, we shall work in the limit Rλ ≪ 1 and denote the corresponding optimal sampling time accordingly. In figure 3(a) we illustrate that can scale super-extensively under our time-inhomogeneous dissipative dynamics—even if we start from (mixed) thermal probes. Specifically, we recover the Zeno scaling (δω)² ∼ 1/n^{3/2} [18, 19].
What figure 3(a) suggests is that, if a large number N of two-level atoms were available, it would be sensible to batch them together in an entangled GHZ-diagonal state and partition the available running time into prepare-and-measure segments of length —the larger the probe, the better the resulting estimate.
In contrast, figure 3(b) tells a completely different story: when adopting an entangled GHZ-diagonal preparation, the efficiency decreases rapidly as the probe is scaled up in size (in this case , although the exponent is non-universal). This is so because, while , the QFI exhibits a slower power-law-like growth. Hence, if there were a cap on the total available energy , one could produce a more accurate frequency estimate by manipulating the uncorrelated atoms locally rather than attempting to build such an 'expensive' entangled state. Our numerics show that this qualitative behaviour persists even if we move away from the regime of Rλ ≪ 1 and search for the measurement setting (ζ1, ζ2) and interrogation time which jointly maximise .
Another natural question to ask in this setting is whether the environmental memory time plays any role in the energy efficiency of frequency estimation. In figure 3(c) we illustrate how decays with λ at any given n. Recall from equation (7) that increasing λ corresponds to reducing the bath memory time, thus making the dissipation 'more Markovian'. Our setting thus showcases how memory effects in the dissipative dynamics can improve the performance of a specific parameter estimation task. Elucidating whether memory effects play an instrumental role in energy-efficient frequency estimation requires a more general analysis that we defer for future work.
4. Conclusions
We have studied the problem of noisy frequency estimation when the total available energy is limited. In each round of our estimation protocol, an ensemble of n initially thermal two-level atoms is brought into a GHZ-diagonal form by means of a simple sequence of qubit gates. We quantified the energetic cost of the preparation stage by looking at the ensuing increase in the average energy of the probe.
The system is then allowed to evolve freely under the effect of environmental noise. This is modelled by a phenomenological master equation with built-in memory effects, which gives rise to phase-covariant free dissipative dynamics.
After further qubit operations, an energy measurement is eventually performed on the probe. We showed that, in a suitable range of parameters, these operations can be chosen so as to globally minimise the statistical uncertainty of the final frequency estimate. We also provided the corresponding optimal measurement prescription explicitly. The cost associated with the (pre-)measurement stage can also be readily calculated from the change in the average energy of the probe, thus allowing for a comprehensive energetic bookkeeping in each round of the protocol.
We introduced the notion of energy efficiency of the estimation as a means to assess the overall performance of the estimation protocol when there is a cap on the total energy . We further found the optimal free evolution time maximising , and noticed that preparing larger probes in entangled GHZ-diagonal states is always detrimental to the energy efficiency of frequency estimation.
In the standard scenario, one assumes that the most restrictive constraint is instead the limited running time of the estimation protocol and resorts to the figure of merit . This grows monotonically with n when optimised over the free evolution time of the probe, thus suggesting that large multipartite entangled probes are, in principle, better. This is so because a figure of merit like fails to capture how 'difficult' or 'costly' it may be to prepare those states in practice. Incorporating the energetic dimension into the performance assessment through our may be the simplest way to quantitatively account for this 'difficulty'.
It is true that tracking the average energy changes of the probe may be a crude way of capturing the actual limitations in force in real metrological setups. Likewise, in many situations, the total time might indeed place the most stringent limitation on the achievable precision, thus rendering other considerations irrelevant. Our observation merely highlights the importance of formulating quantifiers of the metrological efficiency that faithfully capture all the relevant constraints in place in each specific scenario.
We also showed that, at any probe size, decays monotonically with the inverse bath memory time λ, hence suggesting that large bath correlation times might be a resource for energy-efficient frequency estimation. This point certainly deserves a deeper and more general investigation.
Our intended take-home message is that different assessments of resources lead to different notions of optimality. Hence, in order to produce practically useful metrological bounds, the stress should be placed on searching for those figures of merit capable of capturing the most stringent limitations at work in each experimental setup.
To conclude, it is important to remark that we did not optimise our energy efficiency over the initial state of the probe but rather, adopted the GHZ-diagonal preparation as a working assumption. The question of whether or not other forms of multipartite sharing of correlations could give rise to a more energetically favourable scaling remains open and certainly deserves further investigation.
Acknowledgments
We are thankful to A del Campo, K V Hovhannisyan, J Kołodyński, R Kosloff, K Macieszczak, M Mehboudi, J Oppenheim, R Nichols, N A Rodriguez-Briones, A Smirne, T Tufarelli, and R Uzdin for helpful comments. We gratefully acknowledge funding from the Royal Society under the International Exchanges Programme (Grant No. IE150570), the European Research Council under the StG GQCOP (Grant No. 637352), the Foundational Questions Institute (fqxi.org) under the Physics of the Observer Programme (Grant No. FQXi-RFP-1601), and the COST Action MP1209: 'Thermodynamics in the quantum regime'.