
Introducing one-shot work into fluctuation relations


Published 11 September 2015 © 2015 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft
Focus on Quantum Thermodynamics. Citation: Nicole Yunger Halpern et al 2015 New J. Phys. 17 095003. DOI: 10.1088/1367-2630/17/9/095003


Abstract

Two approaches to small-scale and quantum thermodynamics are fluctuation relations and one-shot statistical mechanics. Fluctuation relations (such as Crooks' theorem and Jarzynski's equality) relate nonequilibrium behaviors to equilibrium quantities such as free energy. One-shot statistical mechanics involves statements about every run of an experiment, not just about averages over trials. We investigate the relation between the two approaches. We show that both approaches feature the same notions of work and the same notions of probability distributions over possible work values. The two approaches are alternative toolkits with which to analyze these distributions. To combine the toolkits, we show how one-shot work quantities can be defined and bounded in contexts governed by Crooks' theorem. These bounds provide a new bridge from one-shot theory to experiments originally designed for testing fluctuation theorems.


1. Introduction

The probabilistic nature of thermalization prevents us from deterministically predicting the amount of work performed on a system in any given run of an experiment. This stochasticity necessitates a statistical treatment of work, especially when the deviation from the mean value of work is large. Two popular frameworks employed for this purpose are fluctuation theorems [1–15] and one-shot statistical mechanics [16–23]. The former framework's purpose is to quantify the behaviors of nonequilibrium classical and quantum systems. The latter framework concerns statements true of every trial (realization) of an experiment.

The relationship between these frameworks has been unclear (though work by Åberg [19] suggests that a connection could be fruitful). We will demonstrate that these approaches are not competitors. Rather, the approaches are mutually compatible tools. Combined, they describe general thermal behaviors of small classical and quantum systems.

We will begin with a technical introduction to fluctuation theorems and one-shot statistical mechanics. We then present our main claim: that one-shot statistical mechanics can be applied to settings governed by fluctuation theorems (see figure 1). We substantiate this claim by generalizing the characteristic functions of the work probability distributions for classical and quantum systems. From this generalization, we derive bounds on one-shot work quantities in settings governed by fluctuation theorems. We demonstrate how this generalization can be employed in two mathematical formalisms: a work-extraction game [18, 19] and thermodynamic resource theories [20, 22–26]. To conclude with two pedagogical examples, we apply the generalization to specific fluctuation settings: Landauer bit reset [18, 19, 24, 27–29] and experimental DNA unzipping [30–32]. The examples illustrate the opportunity to test one-shot results with experiments devised originally for fluctuation theorems.


Figure 1. A synopsis of how our results relate to Crooks' theorem. Crooks' fluctuation theorem links a probability distribution ${P}_{\mathrm{fwd}}(W)$ over work expended during one process to the distribution ${P}_{\mathrm{rev}}(-W)$ over the work recouped during the reverse process. One-shot statistical mechanics concerns functions of probability distributions, such as Rényi entropies. We demonstrate how one-shot tools can be applied to problems governed by Crooks' theorem. This application allows us to calculate one-shot properties of the forward protocol from properties of the reverse, without the need to profile entire probability distributions.


2. Preliminaries

2.1. Fluctuation theorems

Consider a classical system that is coupled to a heat bath and driven externally. Due to the probabilistic nature of thermalization, the amount of heat transferred between the system and the bath in any given trial cannot be predicted. Hence the amount of work done by the drive, in any given trial, cannot be predicted. The protocol can be associated with a work distribution P(W), the probability density associated with some trial's costing an amount W of work. In equilibrium thermodynamics, P(W) peaks tightly at the average value $W=\langle W\rangle $. This value suffices to describe the work performed in each trial. In more general, nonequilibrium, thermodynamics, the average does not suffice. Yet thermodynamic state variables related to averages (temperature, free energy, etc) are used in nonequilibrium thermodynamics. Fluctuation relations link these variables to probability distributions over work or heat. We will focus mostly on continuous variables W, which have been used in classical and quantum contexts (e.g., [6, 8, 10, 15]).

One such fluctuation relation is Crooks' theorem [2]. Though originally derived in a classical setting, it has been shown to govern quantum processes [3–12, 14, 15]. Crooks' theorem describes the fluctuations in the work expended on systems subject to a time-changing Hamiltonian $H({\lambda }_{t})$ in the presence of a heat bath. An experimenter can change the external scalar parameter ${\lambda }_{t}$ during the time interval $t\in [-\tau ,\tau ]$ by performing work. We denote the bath's inverse temperature by $\beta =1/{k}_{{\rm{B}}}T$. The external driving can be performed in either a forward or a reverse direction. The forward process begins at time −τ, when the system occupies the thermal state ${{\rm{e}}}^{-\beta {H}_{-\tau }}/{Z}_{-\tau }$ of the initial Hamiltonian ${H}_{-\tau }\equiv H({\lambda }_{-\tau })$. The reverse process begins at time τ, when the system occupies the thermal state ${{\rm{e}}}^{-\beta {H}_{\tau }}/{Z}_{\tau }$ associated with the final Hamiltonian ${H}_{\tau }\equiv H({\lambda }_{\tau })$ of the forward protocol. (${Z}_{\pm \tau }$ denote normalization factors.)

Suppose that an agent implements the protocol in both directions many times, measuring the work invested in each forward trial and the work extracted from each reverse. Two probability distributions encapsulate these measurements: ${P}_{\mathrm{fwd}}(W)$ denotes the probability that some forward trial will require work W (or the probability per unit work, if ${P}_{\mathrm{fwd}}$ denotes a probability density), and ${P}_{\mathrm{rev}}(-W)$ denotes the probability that some reverse trial will output work W.

If the system's interactions with the bath are Markovian and microscopically reversible [33], and if the initial state is thermal, the work probability distributions satisfy Crooks' theorem [2],

Equation (1)
$$\frac{{P}_{\mathrm{fwd}}(W)}{{P}_{\mathrm{rev}}(-W)}={{\rm{e}}}^{\beta (W-{\rm{\Delta }}F)}.$$

This ${\rm{\Delta }}F$ denotes the difference $F({{\rm{e}}}^{-\beta {H}_{\tau }}/{Z}_{\tau })-F({{\rm{e}}}^{-\beta {H}_{-\tau }}/{Z}_{-\tau })$ between the Helmholtz free energies of thermal states over the forward process's final and initial Hamiltonians.

Multiplying each side of Crooks' theorem by ${P}_{\mathrm{rev}}(-W){{\rm{e}}}^{-\beta W}$ and integrating over W yields Jarzynski's equality [1],

Equation (2)
$${\left\langle {{\rm{e}}}^{-\beta W}\right\rangle }_{\mathrm{fwd}}={{\rm{e}}}^{-\beta {\rm{\Delta }}F},$$

wherein ${\langle .\rangle }_{\mathrm{fwd}}$ denotes an expectation value calculated from ${P}_{\mathrm{fwd}}$. Applied to a work distribution P(W) constructed from simulations or experiments, Jarzynski's equality can be used to calculate the equilibrium quantity ${\rm{\Delta }}F$. Combined with Jensen's inequality, $\langle {{\rm{e}}}^{x}\rangle \geqslant {{\rm{e}}}^{\langle x\rangle }$, Jarzynski's equality implies a lower bound on the average work required to complete a trial. This bound, $\langle W\rangle \geqslant {\rm{\Delta }}F$, has been considered a statement of the Second Law of Thermodynamics [34].
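As a concrete illustration (ours, not part of the original analysis), the sketch below assumes a Gaussian work distribution, for which ${\rm{\Delta }}F=\langle W\rangle -\beta {\sigma }^{2}/2$ exactly, and recovers ${\rm{\Delta }}F$ from sampled work values via Jarzynski's equality:

```python
# Sketch: Jarzynski estimate of Delta F from sampled forward work values,
# plus the Jensen bound <W> >= Delta F. Assumes a Gaussian P_fwd(W).
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0                                  # inverse temperature (units with k_B T = 1)
mean_W, sigma = 3.0, 1.5                    # parameters of the assumed Gaussian P_fwd(W)
exact_dF = mean_W - beta * sigma**2 / 2.0   # exact Delta F for this model

W = rng.normal(mean_W, sigma, size=100_000)                # simulated forward work values
dF_estimate = -np.log(np.mean(np.exp(-beta * W))) / beta   # <e^{-beta W}>_fwd = e^{-beta dF}

print(f"Jarzynski estimate of Delta F: {dF_estimate:.3f} (exact {exact_dF:.3f})")
print(f"Jensen bound <W> >= Delta F:   {W.mean():.3f} >= {dF_estimate:.3f}")
```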

The left-hand side (lhs) of Jarzynski's equality has been recognized as the characteristic function, or Fourier transform, of ${P}_{\mathrm{fwd}}(W)$ [6, 15]. If $u={\rm{i}}\beta $ denotes the variable conjugate to W, the characteristic function is

Equation (3)
$${\chi }_{\mathrm{fwd}}(\beta )\equiv {\int }_{-\infty }^{\infty }{\rm{d}}W\,{P}_{\mathrm{fwd}}(W)\,{{\rm{e}}}^{-\beta W}.$$

In terms of the characteristic function, Jarzynski's equality reads

Equation (4)
$${\chi }_{\mathrm{fwd}}(\beta )={{\rm{e}}}^{-\beta {\rm{\Delta }}F}.$$

The reverse process corresponds to the characteristic function ${\chi }_{\mathrm{rev}}(\beta )\equiv {\displaystyle \int }_{-\infty }^{\infty }{\rm{d}}{W}{{P}}_{\mathrm{rev}}(W){{\rm{e}}}^{-\beta W}$, in terms of which ${\chi }_{\mathrm{rev}}(\beta )={{\rm{e}}}^{\beta {\rm{\Delta }}F}$.

2.2. One-shot statistical mechanics

Mean values do not necessarily reflect a system's typical behavior. Consider a system that must output at least some threshold amount of work to trigger another process. One such threshold is the activation energy required to begin a chemical reaction. The system might output below-threshold work usually but far-above-threshold work occasionally. The average work might exceed the threshold, but the second process is usually not triggered.

By spotlighting statistics other than the mean, one-shot information theory extends idealized protocols implemented $n\to \infty $ times to realistic finite-n protocols that might fail. Conventional statistical mechanics describes the optimal rate at which work can be extracted asymptotically. Consider transforming n copies of one equilibrium state into n copies of another quasistatically, in the presence of a temperature-T heat bath. In the asymptotic, or thermodynamic, limit as $n\to \infty $, the average work required per copy approaches the difference ${\rm{\Delta }}F$ between the states' free energies. The free energy depends on the Shannon entropy. In reality, states are transformed finitely many times, and realistic processes have probabilities δ of failing to accomplish their purposes. Finite-n work-consumption rates have been quantified with one-shot entropies [17, 20–23]. So have the efficiencies of finite-n data compression, randomness extraction, quantum key distribution, and hypothesis testing [16, 35–37].

One one-shot entropy is the order-$\infty $ Rényi entropy ${H}_{\infty }({\mathcal{P}})$, known also as the min-entropy. For any discrete probability distribution ${\mathcal{P}}$ whose greatest element is ${{\mathcal{P}}}^{\mathrm{max}}$,

Equation (5)
$${H}_{\infty }({\mathcal{P}})\equiv -\mathrm{log}\left({{\mathcal{P}}}^{\mathrm{max}}\right).$$

(All logarithms in this article are base-e.) We will discuss two popular models in which one-shot entropies are applied to thermodynamics: work-extraction games and thermodynamic resource theories.
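For concreteness, a minimal sketch (ours) of equation (5) for a discrete distribution, using natural logarithms as in the text:

```python
# Sketch: the order-infinity (min-) entropy H_inf(P) = -log(max_i P_i).
import numpy as np

def min_entropy(probs):
    probs = np.asarray(probs, dtype=float)
    assert np.all(probs >= 0) and np.isclose(probs.sum(), 1.0)
    return -np.log(probs.max())

print(min_entropy([0.25, 0.25, 0.25, 0.25]))  # log 4 ~ 1.386 (equals the Shannon entropy here)
print(min_entropy([0.70, 0.10, 0.10, 0.10]))  # ~0.357 (lies below the Shannon entropy)
```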

2.2.1. Work-extraction game

In the work-extraction game described by Egloff et al [18], a player transforms a system in a state ρ, governed by a Hamiltonian ${H}_{\rho }$, into a state σ governed by ${H}_{\sigma }$ : $(\rho ,{H}_{\rho })\mapsto (\sigma ,{H}_{\sigma })$. For simplicity, we take a quasiclassical model such that states are assumed to commute with their Hamiltonians. The agent has access to a temperature-T heat bath.

The player should choose an optimal strategy to maximize the transformation's work output (or minimize the transformation's work cost). The strategy consists of a sequence of operations of two types: (1) without investing work, the player can couple the system to the bath in any manner modeled by a stochastic matrix that preserves the Gibbs state ${{\rm{e}}}^{-\beta {H}_{\rho }}/{Z}_{\rho }$. (Such thermalization models are discussed in appendix A.) (2) By investing or extracting work, the agent can shift the Hamiltonian's levels.

The primary result in [18] implies an upper bound on the work extractable (up to a probability δ of failure) during the transformation $(\rho ,{H}_{\rho })\mapsto (\sigma ,{H}_{\sigma })$. Egloff et al show that the optimal strategy has a probability $1-\delta $ of outputting at least the work

Equation (6)

in each trial. ${G}_{T}$ denotes Gibbs-rescaling relative to the temperature T. Gibbs-rescaling facilitates the comparison of the work values of states governed by different Hamiltonians. A state can have the capacity to perform work due to the state's information content (e.g., because the state is pure) and energy content (e.g., because the state has weight on high energy levels). Gibbs-rescaling recasts the state's work capacity as entirely informational. This recasting enables us to compare the final and initial states, even though different Hamiltonians govern them. M denotes the relative mixedness, a measure of how much more mixed one state is than another (equivalently, how much less information-sourced work capacity it has). Dissipative processes yield less than the optimal amount ${w}_{\mathrm{best}}^{\delta }$ of work. Hence equation (6) upper-bounds the amount that could be extracted with an arbitrary (possibly suboptimal) strategy.

2.2.2. Thermodynamic resource theories

Resource theories have been used to calculate how efficiently scarce quantities can be distilled and converted into other forms via cheap (or 'free') operations [25]. To an agent able to perform only certain operations for free, each state has some value, or worth. We can quantify this value with resource theories. Thermodynamic resource theories model exchanges of heat amongst systems and baths [20, 22–24, 26]. Each resource theory is defined by the inverse temperature β of a heat bath from which the agent can draw Gibbs states for free. More generally, energy-conserving thermal operations can be performed for free. Nonequilibrium states have value because work can be extracted from them.

Horodecki and Oppenheim introduced one-shot tools into thermodynamic resource theories [20]. They focused on quasiclassical resource theories, in which states commute with the Hamiltonians that govern them. Horodecki and Oppenheim calculated the minimum work required to create a state within trace distance $\epsilon $ of a target state ρ. They also analyzed the transfer of work from ρ to a battery defined by a two-level Hamiltonian of gap w. The maximum w such that the battery ends within trace distance δ of its excited state was shown to be related to a one-shot entropy of ρ. One-shot information theory has since been applied to catalysis (the facilitation of a transformation by an ancilla) [22, 26], to arbitrary baths such as particle baths [23], and to quantum problems (that involve states that do not commute with the Hamiltonian) [38, 39].

3. Unification of fluctuation theorems and one-shot statistical mechanics

Fluctuation theorems and one-shot statistical mechanics concern properties of work distributions beyond averages. The two frameworks do not compete to describe the same concept in alternative ways. Rather, the frameworks complement each other and can be combined into a general description of small-scale classical and quantum systems. Fluctuation theorems are restricted to systems that satisfy certain physical assumptions and that can undergo forward and reverse protocols. Crooks' theorem [2], for example, relies on the dynamics' Markovianity and microscopic reversibility, and on the system's beginning in a thermal state. The tools of one-shot statistical mechanics (e.g. Rényi entropies, and bounds on work values in every trial of an experiment) can be applied more generally to the statistics produced by any system that consumes work. The formalisms are not incompatible: the tools of one-shot statistical mechanics can be applied to the work distribution of any process governed by Crooks' theorem (see figure 1).

We will substantiate this claim by focusing on the one-shot concept of guaranteed work: an upper bound (up to some error) on the work required to complete some process that applies not just on average, but in every trial. We will define this quantity in contexts governed by Crooks' theorem and will relate the quantity to the one-shot entropy ${H}_{\infty }$. Our results describe all quantum and classical systems whose thermalization satisfies microscopic reversibility and Markovianity and whose work distribution is continuous. (For details about these assumptions' realizations in two common one-shot frameworks, see appendix A.)

3.1. One-shot work quantities in fluctuation contexts

Suppose we consider the behavior of a system evolving under a process that is driven by a single external parameter, and otherwise satisfies the conditions for Crooks' theorem to hold. For clarity in this article, we shall choose examples where the forward process tends to cost work to complete. We will upper-bound the amount of work required to complete a single trial of a process successfully, such that this bound is only exceeded with probability $\varepsilon $ (as illustrated on the right-hand side (rhs) of figure 2).


Figure 2. One-shot work quantities ${W}^{\varepsilon }$ and ${w}^{\delta }$. The forward and reverse work distributions ${P}_{\mathrm{fwd}}(W)$ and ${P}_{\mathrm{rev}}(-W)$ intersect at ${\rm{\Delta }}F$, the difference between the free energies of the Gibbs states associated with the forward protocol's initial and final Hamiltonians. The shaded region under the right tail of the ${P}_{\mathrm{fwd}}(W)$ curve has an area $\varepsilon $, which is the probability that a forward process consumes more work than the $\varepsilon $-required work ${W}^{\varepsilon }$. The shaded region under the left tail of the ${P}_{\mathrm{rev}}(-W)$ curve has an area $\delta $, which is the probability that the reverse process outputs less work than the $\delta $-extractable work ${w}^{\delta }$.


Definition 1. Each implementation of the forward protocol has a probability $1-\varepsilon $ of requiring no more work than the ${\boldsymbol{\varepsilon }}$-required work ${W}^{\varepsilon }$ that satisfies

Equation (7)
$$1-\varepsilon ={\int }_{-\infty }^{{W}^{\varepsilon }}{\rm{d}}W\,{P}_{\mathrm{fwd}}(W).$$

The trial has a probability $\varepsilon \in [0,1]$ of requiring more work than ${W}^{\varepsilon }$.

Similarly, we will lower-bound the amount of work extracted in the reverse process, for all but δ of the trials (illustrated on the lhs of figure 2).

Definition 2. Each implementation of the reverse protocol has a probability $1-\delta $ of outputting at least the $\delta $-extractable work ${w}^{\delta }$ that satisfies

Equation (8)
$$1-\delta ={\int }_{{w}^{\delta }}^{\infty }{\rm{d}}W\,{P}_{\mathrm{rev}}(-W).$$

The trial has a probability $\delta \in [0,1]$ of outputting less work than ${w}^{\delta }$.

The failure probability has two interpretations. Suppose, in the work-investment case, that an agent invests only the amount ${W}^{\varepsilon }$ of work in a forward trial. The external parameter ${\lambda }_{t}$ has a probability $\varepsilon $ of failing to reach ${\lambda }_{\tau }$. Alternatively, suppose the agent invests all the work required to evolve ${\lambda }_{t}$ to ${\lambda }_{\tau }$. The agent has a probability $\varepsilon $ of overshooting the 'work budget' ${W}^{\varepsilon }$. The failure probability δ associated with work extraction can be interpreted similarly.
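Operationally, definitions 1 and 2 are quantiles of the work values recorded over many trials. A minimal sketch (ours, with arbitrary toy numbers) of how the two quantities could be estimated from such records:

```python
# Sketch: estimating the one-shot work quantities of definitions 1 and 2 from samples.
import numpy as np

def epsilon_required_work(forward_costs, eps):
    """Smallest W exceeded in only a fraction eps of forward trials (definition 1)."""
    return np.quantile(forward_costs, 1.0 - eps)

def delta_extractable_work(reverse_outputs, delta):
    """Largest w undershot in only a fraction delta of reverse trials (definition 2)."""
    return np.quantile(reverse_outputs, delta)

rng = np.random.default_rng(1)
forward_costs = rng.normal(3.0, 1.5, size=50_000)      # toy forward work costs (units of kT)
reverse_outputs = rng.normal(0.75, 1.5, size=50_000)   # toy reverse work outputs (units of kT)
print(epsilon_required_work(forward_costs, eps=0.05))
print(delta_extractable_work(reverse_outputs, delta=0.05))
```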

3.2. One-shot Jarzynski equalities

Even if only forward trials have been performed, the reverse process's ${w}^{\delta }$ can be calculated from Crooks' theorem.

Lemma 1. Each reverse trial has a probability $1-\delta $ of outputting at least the amount ${w}^{\delta }$ of work that satisfies

Equation (9)
$${\chi }_{\mathrm{fwd}}^{\delta }(\beta )=(1-\delta )\,{{\rm{e}}}^{-\beta {\rm{\Delta }}F},$$

wherein

Equation (10)
$${\chi }_{\mathrm{fwd}}^{\delta }(\beta )\equiv {\int }_{{w}^{\delta }}^{\infty }{\rm{d}}W\,{P}_{\mathrm{fwd}}(W)\,{{\rm{e}}}^{-\beta W}$$

generalizes the characteristic function ${\chi }_{\mathrm{fwd}}(\beta )$.

Proof. Upon multiplying each side of Crooks' theorem (equation (1)) by ${P}_{\mathrm{rev}}(-W){{\rm{e}}}^{-\beta W}$, we integrate from ${w}^{\delta }$ to infinity. The lhs equals ${\chi }_{\mathrm{fwd}}^{\delta }(\beta )$ by definition (equation (10)). The right-hand integral evaluates to $1-\delta $ by definition 2. □
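A quick numerical check of lemma 1 (our sketch, assuming a Gaussian forward work distribution; Crooks' theorem then fixes ${P}_{\mathrm{rev}}(-W)$, and Jarzynski's equality fixes ${\rm{\Delta }}F$):

```python
# Sketch: numerical check of lemma 1 for a Gaussian P_fwd(W), with P_rev(-W)
# and Delta F fixed by Crooks' theorem and Jarzynski's equality respectively.
import numpy as np

beta, mu, sigma, delta = 1.0, 3.0, 1.5, 0.1
dF = mu - beta * sigma**2 / 2.0                       # Delta F for the Gaussian model

W = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200_001)
dW = W[1] - W[0]
p_fwd = np.exp(-(W - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
p_rev = p_fwd * np.exp(-beta * (W - dF))              # P_rev(-W), from equation (1)

cdf_rev = np.cumsum(p_rev) * dW                       # area below W under P_rev(-W)
w_delta = W[np.searchsorted(cdf_rev, delta)]          # definition 2: area below w^delta is delta

chi_delta = np.sum((p_fwd * np.exp(-beta * W))[W >= w_delta]) * dW   # equation (10)
print(chi_delta, (1 - delta) * np.exp(-beta * dF))    # the two sides of equation (9) agree
```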

We can calculate ${W}^{\varepsilon }$ from ${P}_{\mathrm{rev}}(-W)$ via Crooks' theorem in the same way:

Lemma 2. Each forward trial has a probability $1-\varepsilon $ of requiring no more work than the ${W}^{\varepsilon }$ that satisfies

Equation (11)
$${\chi }_{\mathrm{rev}}^{\varepsilon }(\beta )=(1-\varepsilon )\,{{\rm{e}}}^{\beta {\rm{\Delta }}F},$$

wherein

Equation (12)
$${\chi }_{\mathrm{rev}}^{\varepsilon }(\beta )\equiv {\int }_{-\infty }^{{W}^{\varepsilon }}{\rm{d}}W\,{P}_{\mathrm{rev}}(-W)\,{{\rm{e}}}^{\beta W}$$

generalizes the characteristic function ${\chi }_{\mathrm{rev}}(\beta )$.

These one-shot generalizations extend Jarzynski's equality (equation (3)), rendering it more robust against unlikely (probability less than $\epsilon $) but highly expensive (work cost more than ${W}^{\varepsilon }$) fluctuations in work. The generalizations characterize every quantum or classical process that produces a work distribution governed by Crooks' theorem. An alternative proof of these general lemmata—specialized for quantum systems undergoing unitary evolution and hence producing discrete work distributions—is presented in appendix B.

3.3. Bounding one-shot work quantities

We can use Crooks' theorem, via the above lemmata, to derive bounds on the one-shot required and extractable work. These bounds depend on characteristics of the work distributions:

Theorem 3. The work δ-extractable from each reverse trial satisfies

Equation (13)
$${w}^{\delta }\leqslant {\rm{\Delta }}F-{k}_{{\rm{B}}}T\left[\mathrm{log}(1-\delta )+{H}_{\infty }^{\beta }({P}_{\mathrm{fwd}})\right]$$

for $\delta \in [0,1)$, wherein we have defined

Equation (14)
$${H}_{\infty }^{\beta }(P)\equiv -\mathrm{log}\left(\frac{{P}^{\mathrm{max}}}{\beta }\right)$$

for continuous work distributions.

Proof. Let ${P}_{\mathrm{fwd}}^{\mathrm{max}}$ denote the greatest value of ${P}_{\mathrm{fwd}}(W)$ : ${P}_{\mathrm{fwd}}^{\mathrm{max}}\geqslant {P}_{\mathrm{fwd}}(W)\ \forall W$. We can upper-bound the integral implicit in the ${\chi }_{\mathrm{fwd}}^{\delta }(\beta )$ of equation (9) in lemma 1:

Equation (15)
$$(1-\delta )\,{{\rm{e}}}^{-\beta {\rm{\Delta }}F}={\chi }_{\mathrm{fwd}}^{\delta }(\beta )\leqslant {P}_{\mathrm{fwd}}^{\mathrm{max}}{\int }_{{w}^{\delta }}^{\infty }{\rm{d}}W\,{{\rm{e}}}^{-\beta W}=\frac{{P}_{\mathrm{fwd}}^{\mathrm{max}}}{\beta }\,{{\rm{e}}}^{-\beta {w}^{\delta }}.$$

Solving for ${w}^{\delta }$ yields inequality (13).□
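A sketch (ours) verifying the bound of theorem 3 for the same Gaussian model, for which ${P}_{\mathrm{fwd}}^{\mathrm{max}}$ and hence ${H}_{\infty }^{\beta }({P}_{\mathrm{fwd}})$ are known in closed form:

```python
# Sketch: checking w^delta <= Delta F - kT [log(1-delta) + H_inf^beta(P_fwd)]
# for the Gaussian model used above.
import numpy as np

beta, mu, sigma = 1.0, 3.0, 1.5
dF = mu - beta * sigma**2 / 2.0
P_max_fwd = 1.0 / np.sqrt(2 * np.pi * sigma**2)       # peak density of the Gaussian P_fwd
H_inf_beta = -np.log(P_max_fwd / beta)                # continuous min-entropy, reference k = beta

W = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200_001)
dW = W[1] - W[0]
p_fwd = np.exp(-(W - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
cdf_rev = np.cumsum(p_fwd * np.exp(-beta * (W - dF))) * dW    # CDF of P_rev(-W), via Crooks

for delta in (0.01, 0.1, 0.5):
    w_delta = W[np.searchsorted(cdf_rev, delta)]
    bound = dF - (np.log(1 - delta) + H_inf_beta) / beta
    print(f"delta={delta}: w^delta = {w_delta:.3f} <= bound = {bound:.3f}")
```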

Analogous statements, which we present without further proof, describe ${W}^{\varepsilon }$:

Theorem 4. The work $\varepsilon $-required during each forward trial is bounded by

Equation (16)
$${W}^{\varepsilon }\geqslant {\rm{\Delta }}F+{k}_{{\rm{B}}}T\left[\mathrm{log}(1-\varepsilon )+{H}_{\infty }^{\beta }({P}_{\mathrm{rev}})\right]$$

for failure probability $\varepsilon \in [0,1)$.

These theorems demonstrate our central claim: that one-shot statistical mechanics can be applied to settings governed by fluctuation theorems. We have related the one-shot work quantities ${W}^{\epsilon }$ and ${w}^{\delta }$ to the one-shot entropy ${H}_{\infty }^{\beta }$.

Operationally, when one handles data arising from a simulation or experiment, one does not directly observe a work distribution. Rather, one obtains a list of values that could be divided into bins of finite range (i.e., presented as a histogram) to approximate the true probability distribution. As we want a theoretical bound that reflects the behavior of the underlying thermal process independently of the choice of binning, we consider the entropy expressed in terms of the probability density $P(W)$. For theorems 3 and 4 to be applicable, it must be the case that the experimental data can be fitted to a theoretical model—a weak assumption for any scientific experiment.

There are further considerations that must be taken into account when considering the entropy of a continuous distribution in contrast with the entropy of a histogram taken from that distribution. (Throughout the following paragraph, we refer to ${P}_{\mathrm{fwd}}$ and ${P}_{\mathrm{rev}}$ jointly as P.) For a histogram taken of $P(W)$, in the limit of small enough bin width dW, the probability of each bin is well approximated by the product $P(W){\rm{d}}W$ as evaluated at a point W inside that bin. However, in the limit of decreasing bin size, the probability of being in any particular bin will tend to zero, and the order-$\infty $ entropy, as previously defined, diverges: ${H}_{\infty }(P)={\mathrm{lim}}_{{\rm{d}}W\to 0}[-\mathrm{log}({P}^{\mathrm{max}}{\rm{d}}W)]\to \infty $. An unmodified ${H}_{\infty }$ is not a useful quantity in this limit. By this measure, there is an infinite amount of information in any continuous distribution, and hence the measure cannot be used to quantify the amount by which some continuous distributions are more entropic than others. To circumvent this problem, we employ a type of renormalization. ${H}_{\infty }$ can be split into a finite part that varies with the distribution under consideration, and an infinite part that does not: ${H}_{\infty }=-\mathrm{log}({P}^{\mathrm{max}}{\rm{d}}W)=-\mathrm{log}({P}^{\mathrm{max}}{k}^{-1})-\mathrm{log}(k{\rm{d}}W)$, where k is an arbitrary factor with units of inverse energy chosen such that the argument of each logarithm is dimensionless. When one considers the difference in entropy between distributions, the latter term always cancels out. As such, the quantity ${H}_{\infty }^{k}\equiv -\mathrm{log}({P}^{\mathrm{max}}{k}^{-1})$, obtained by omitting the latter term, quantifies the amount by which the continuous order-$\infty $ entropy differs from that of a reference distribution of uniform probability density k over width ${k}^{-1}$. Our choice here of

Equation (17)
$${H}_{\infty }^{\beta }(P)\equiv -\mathrm{log}\left({P}^{\mathrm{max}}{\beta }^{-1}\right)$$

amounts to comparing the entropy of $P(W)$ with that of a uniform distribution over range ${\beta }^{-1}$ with probability density β. We remark that this technique is not unique to the order-$\infty $ Rényi entropy, but is also necessary if one wishes to arrive at the differential entropy as a limiting case of the discrete Shannon entropy.

Although any quantity with units of inverse energy could be used as k, β is a natural choice: $-\mathrm{log}\beta $ appears everywhere $\mathrm{log}{P}^{\mathrm{max}}$ appears in our calculations; any alternative choice of normalization k would result in the need for an additional correction term of $\mathrm{log}(k/\beta )$ in inequalities (13) and (16). (This extra term can be interpreted as the amount by which the entropy of the new reference distribution differs from that of the uniform distribution of range ${\beta }^{-1}$ and probability density β.)

The bounds in theorems 3 and 4 shed light on the physical contributions to ${W}^{\varepsilon }$. To a first approximation, the $\varepsilon $-required work equals ${\rm{\Delta }}F$, the work needed to complete the process quasistatically. The negative contribution from $\mathrm{log}(1-\varepsilon )$ accounts for the tradeoff between work and failure probability: the agent can lower the bound on the required work by accepting a higher failure probability $\varepsilon $.

Outside the quasistatic limit, the system leaves equilibrium, and W fluctuates from trial to trial. This fluctuation necessitates a protocol-specific correction ${H}_{\infty }^{\beta }({P}_{\mathrm{rev}})$. In cryptography applications, ${H}_{\infty }(P)$ quantifies the uniform randomness (and hence resources usable to ensure privacy) extractable from a distribution P [35]. A distribution might have more randomness, but ${H}_{\infty }$ quantifies the minimum value (being the lowest-valued Rényi entropy). In our result, ${H}_{\infty }^{\beta }({P}_{\mathrm{rev}})$ can be thought of as the uniform randomness intrinsic to the work distribution. ${H}_{\infty }^{\beta }({P}_{\mathrm{rev}})$ thus quantifies fluctuations in work, caused by irreversibility, that raise the lower bound on ${W}^{\varepsilon }$.

Whereas some one-shot results (e.g. [18, 22]) involve Rényi entropies of states, ${H}_{\infty }^{\beta }({P}_{\mathrm{fwd}})$ is an entropy of a work distribution. ${H}_{\infty }^{\beta }({P}_{\mathrm{fwd}})$ captures fluctuation information from all sources that might affect the work distribution. These sources include the initial state and the manner in which the protocol is executed (e.g., quickly or quasistatically). In contrast, an entropy evaluated on states encodes only some of this information. By containing an entropy of a work distribution, rather than an entropy of a state, the above results can be applied in a more general setting: they can be related to the output of any procedure, as opposed to the worth of a particular input state under a fixed procedure (usually taken to be the optimal one [18]). Consequently, our results remain independent of the work-extraction model used. Theorems 3 and 4 govern (semi)classical and quantum systems, so long as the protocol produces a work distribution consistent with Crooks' theorem.

As theoretical limits, these bounds will always hold true. However, to be operationally useful for bounding ${w}^{\delta }$ (or ${W}^{\varepsilon }$) they must be applied to models where ${P}^{\mathrm{max}}$ can be upper-bounded following enough trials of the experiment. This is possible if the distribution $P(W)$ is smooth such that beyond a certain narrowness of bin width, any further division of each bin results in approximately equal probability densities.
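As an illustration (ours), the sketch below estimates ${P}^{\mathrm{max}}$ from a finite set of synthetic work values at several bin widths; the estimate of ${H}_{\infty }^{\beta }$ stabilizes once the bins resolve the distribution's peak:

```python
# Sketch: estimating P^max (and hence H_inf^beta) from a finite list of work values
# by binning, and checking that the estimate stabilizes as the bins are refined.
import numpy as np

rng = np.random.default_rng(2)
beta = 1.0
work_values = rng.normal(3.0, 1.5, size=200_000)    # synthetic stand-in for measured work costs

for n_bins in (10, 50, 200):
    counts, edges = np.histogram(work_values, bins=n_bins, density=True)
    P_max = counts.max()                             # estimated peak probability density
    H_inf_beta = -np.log(P_max / beta)
    print(f"{n_bins:4d} bins: P_max = {P_max:.3f} beta, H_inf^beta = {H_inf_beta:.3f}")
```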

Applying information about the protocol executed in one direction, we have used Crooks' theorem to infer about the opposite direction. This information tightens the bound on ${W}^{\varepsilon }$ when ${H}_{\infty }^{\beta }({P}_{\mathrm{rev}})\geqslant 0$, i.e., when

Equation (18)
$${P}_{\mathrm{rev}}^{\mathrm{max}}\leqslant \beta .$$

Systems described poorly by conventional statistical mechanics tend to satisfy inequality (18). Such systems' work distributions have significant spreads relative to the characteristic energy scale ${\beta }^{-1}$, such that the distribution lacks tall peaks. We will present DNA-hairpin experiments as an example.

3.4. Crooks' theorem in specific one-shot work-extraction models

3.4.1. Tightening a bound in the work-extraction game

Egloff et al calculate the optimal amount ${w}_{\mathrm{best}}^{\delta }$ of work δ-extractable from a state via the most efficient strategy [18]. Their calculation implies an upper bound on the work extractable via arbitrary strategies. By applying theorems 3 and 4 to the Egloff et al framework, we can tighten the bound for protocols that satisfy the assumptions used to derive Crooks' theorem and for which ${P}^{\mathrm{max}}\lt \beta $.

In the Egloff et al setting, we can consider a forward protocol that consists of two stages: first, the thermal state ${\gamma }_{-\tau }\equiv {{\rm{e}}}^{-\beta {H}_{-\tau }}/{Z}_{-\tau }$ transforms into some nonequilibrium state σ as the externally driven Hamiltonian changes. The system either can remain thermally isolated or can thermalize, provided that the thermalization satisfies detailed balance (see appendix A). The agent can choose one of many possible strategies, e.g., by alternating Hamiltonian changes and thermalizations or by thermally isolating the system throughout the first stage. Second, σ thermalizes to ${\gamma }_{\tau }\equiv {{\rm{e}}}^{-\beta {H}_{\tau }}/{Z}_{\tau }.$ This thermalization neither costs nor produces work. The entire protocol is encapsulated in $({\gamma }_{-\tau },{H}_{-\tau })\mapsto (\sigma ,{H}_{\tau })\mapsto ({\gamma }_{\tau },{H}_{\tau }).$ At the start of the reverse protocol, the system begins in the thermal state of ${H}_{\tau }$. Under the time-reversed process (in which the drive is reversed, such that the Hamiltonian retraces its path through configuration space), the system is transformed into some nonequilibrium state $\sigma ^{\prime} $. Then the state thermalizes to the thermal state of ${H}_{-\tau }$.

For protocols that fall into the above category, knowledge of one protocol lets us bound the work extractable from, or the work cost of, the opposite protocol:

Corollary 5. The work δ-extractable from each implementation of the reverse protocol, in terms of the forward protocol's ${H}_{\infty }^{\beta }({P}_{\mathrm{fwd}})$, satisfies

Equation (19)

for $\delta \in [0,1)$.

Corollary 6. The work $\varepsilon $-required during each implementation of the forward protocol, in terms of the reverse protocol's ${H}_{\infty }^{\beta }({P}_{\mathrm{rev}})$, satisfies

Equation (20)

for $\varepsilon \in [0,1)$.

The proofs appear in appendix C.2. Each corollary consists of a bound derived from [18] and an ${H}_{\infty }^{\beta }$ correction attributable to Crooks' theorem (introduced via theorems 3 and 4). The ${H}_{\infty }^{\beta }$ quantifies the protocol's suboptimality, caused by dissipation due to the protocol's speed [29]. Positive values of ${H}_{\infty }^{\beta }$ tighten the bounds. Hence incorporating information about the forward (reverse) process into the reverse (forward) bound improves the bound when the process deviates sufficiently from the quasistatic ideal.

3.4.2. Modeling fluctuation-relation problems with resource theories

One can formulate scenarios governed by Crooks' theorem in thermodynamic resource theories. Such scenarios involve a sequence of thermal operations that obey detailed balance. Such operations form a strict subset of the set of all thermal operations (see appendix A). Hence Crooks' theorem does not govern all thermal operations. The application of Crooks' theorem requires the introduction of work and time into the resource theories. Work can be defined in terms of a battery [21, 23]; and time, in terms of a clock [20, 26]. Resource-theory results can be used to derive the work cost of a sequence of transformations that a system governed by Crooks' theorem can follow (see appendix D.2).

We leave for future work the derivation, from resource-theory results, of testable predictions about Crooks' problem. Considerable mathematical tools, such as monotones [20, 22, 41] and catalysts [22, 41], have been developed within the resource-theory framework. Having demonstrated the applicability of Crooks' theorem to resource theories, we look to use Crooks' theorem to bridge these mathematical tools to experiments.

4. Examples of one-shot work quantities in fluctuation contexts

4.1. Landauer bit reset and Szilárd work extraction

A simple example involves the heat-exchanging portion of Landauer bit reset and its reverse, Szilárd work extraction. The set-up consists of a two-level system ${\mathcal{S}}$ governed by the Hamiltonian $H({\lambda }_{t})=E(t)| E\rangle \langle E| $. Suppose ${\mathcal{S}}$ exchanges energy with a heat bath whose inverse temperature is β. At time $t=-\tau $, $E(t)=0$, and ${\mathcal{S}}$ is in thermal equilibrium, i.e., in the maximally mixed state $\rho (-\tau )=\frac{1}{2}(| 0\rangle \langle 0| +| E\rangle \langle E| )$. If ρ represents the location of a particle in a two-compartment box, the agent has no idea which compartment the particle occupies.

Transforming $\rho (-\tau )$ into a pure state—forcing the particle into one half of the box—is called bit reset, or Landauer erasure. Resetting the bit quasistatically costs, on average,

Equation (21)
$$\langle W\rangle ={k}_{{\rm{B}}}T\,\mathrm{log}\,2.$$

If the bit is reset in a finite time, $\langle W\rangle $ might exceed ${k}_{{\rm{B}}}T\mathrm{log}2$ [29]. Such a protocol has appeared in fluctuation contexts before [18, 19, 29] and has been realized experimentally (e.g., in a test of Jarzynski's equality by Brownian motion [42, 43]).

We define failure under the assumption that every started trial is completed: a forward trial fails if it consumes more work than the budgeted work ${W}^{\varepsilon }$. (Alternatively, one could define 'failure' under the assumption that too-costly trials would not be completed. A trial would be said to fail if the budgeted work were consumed but the bit had not been reset.)

Reversal of the bit reset amounts to Szilárd work extraction. Leó Szilárd envisioned the conversion of information into work in 1929 [27]. ${\mathcal{S}}$ begins thermally isolated, in the pure state $| 0\rangle $, and governed by $H({\lambda }_{t})=0$. During the first leg of Szilárd work extraction, the agent raises $E(t)$ to infinity. The raising costs no work because ${\mathcal{S}}$ occupies the lower level. In the second leg, ${\mathcal{S}}$ is coupled to the bath, then performs positive work as $E(t)$ decreases to zero. To be strictly the reverse of the bit reset presented above, we consider only the second leg as an implementation of the reverse protocol. The initial state, although it is a pure state, is thermal since only the lower energy level has non-zero occupation probability according to the Gibbs distribution. As such this work-extraction step can be linked to the bit reset via Crooks' theorem; their Hamiltonians are the time-reverse of each other, and each protocol starts in the appropriate thermal state.

These two processes, forming a forward-and-reverse pair governed by Crooks' theorem, provide a natural example with which to test our one-shot results. We performed a Monte Carlo simulation of Landauer erasure and Szilárd work extraction. Details appear in appendix E. The simulation produced results consistent with theorem 3, as shown in figure 3.
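For concreteness, a rough Monte Carlo sketch of the forward (bit-reset) protocol follows. The model choices (discrete level-raising steps with full rethermalization after each step) are our own illustrative assumptions, not the simulation detailed in appendix E:

```python
# Rough sketch of the forward (bit-reset) protocol: the excited level is raised in
# discrete steps, with full rethermalization after each step. Energies in units of kT.
import numpy as np

rng = np.random.default_rng(3)
beta, E_max, n_steps, n_trials = 1.0, 20.0, 50, 10_000
dE = E_max / n_steps

def bit_reset_work():
    E, excited = 0.0, rng.random() < 0.5          # start thermal at E = 0
    work = 0.0
    for _ in range(n_steps):
        if excited:
            work += dE                            # raising an occupied level costs work
        E += dE
        p_exc = np.exp(-beta * E) / (1.0 + np.exp(-beta * E))
        excited = rng.random() < p_exc            # full rethermalization (model choice)
    return work

W = np.array([bit_reset_work() for _ in range(n_trials)])
for eps in (0.5, 0.1, 0.01):
    print(f"eps={eps}: W^eps = {np.quantile(W, 1 - eps):.2f} kT "
          f"(quasistatic average: kT log 2 = {np.log(2):.3f} kT)")
```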


Figure 3. One-shot work versus failure probability for numerical simulations of Landauer erasure and Szilárd work extraction. One-shot work quantities were calculated in a setting governed by fluctuation theorems. We simulated 10 000 bit-reset trials. The predicted outputs for work-extraction trials (dark blue), inferred via Crooks' theorem, are compared to the work outputs from 10 000 directly simulated work-extraction trials (light blue) and to the one-shot generalization of Jarzynski's equality presented in this article (red dashed). The horizontal dotted gray line indicates the free-energy difference of the process (${k}_{{\rm{B}}}T\mathrm{ln}2$). The dark blue curve coinciding with the light blue curve supports the applicability of Crooks' theorem to this idealized setting. The red dashed curve bounds the blue curves from above, showing that the one-shot generalization of Jarzynski's equality (theorem 3) governs this scenario.


4.2. Experiment: DNA-hairpin unzipping

When single molecules are manipulated experimentally, 'fluctuations are relevant and deviations from the average behavior are observable' [32]. Some such experiments are known to obey fluctuation relations. We show that data from DNA-hairpin experiments used previously to test Crooks' theorem [30–32] agree with the one-shot results in section 3. The agreement suggests that one-shot statistical mechanics might shed light on similar single-molecule experiments and applications. Alternatively, such experiments might be used to test one-shot statistical mechanics.

A DNA hairpin is a short double helix of about 21 base pairs [30–32]. The helix's two strands are called legs. One end of one leg is attached to one end of the other leg, forming a shape like a hairpin's. The other end of each leg ends in a handle formed from DNA. To each handle is attached a polystyrene or silica bead. One bead remains anchored on a micropipette. The other is caught in an optical trap that exerts a force. During the forward protocol, these optical tweezers pull the legs apart, unzipping the DNA into one strand. The more quickly the hairpin is split (the greater the pulling speed), the more work is dissipated. During the reverse protocol, the helix is rezipped.

We combined work distributions provided by Ritort and Alemany [30, 32, 44] with theorem 4 (illustrated in figure 4). Each graph shows the work ${W}^{\varepsilon }$ that a given unzipping trial is $\varepsilon $-guaranteed to require, plotted against the failure probability $\varepsilon $. We converted the data (a list of work values) into a probability distribution by forming a histogram with 50 equally sized bins that span the range of work costs. Energy is given in units of ${k}_{{\rm{B}}}T$, such that $\beta =1$. For pulling speeds of 15, 60, and 180 ${\rm{nm}}\,{{\rm{s}}}^{-1}$, the binning resulted in distributions whose ${P}^{\mathrm{max}}$ = 0.465 β, 0.277 β, and 0.162 β, respectively. In all three cases ${P}^{\mathrm{max}}\lt \beta $, such that the ${H}_{\infty }^{\beta }$ term in theorem 4 tightens the bound.
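A sketch (ours) of this analysis pipeline; the input file names are hypothetical placeholders for the lists of measured work values:

```python
# Sketch of the binning analysis above. The file names are hypothetical placeholders
# for lists of measured work values (one per line, in units of k_B T, so beta = 1):
# unzipping (forward) costs and rezipping (reverse) outputs.
import numpy as np

beta = 1.0
W_fwd = np.loadtxt("unzip_work_kT.txt")               # forward (unzipping) work costs
W_rev = np.loadtxt("rezip_work_kT.txt")               # reverse (rezipping) work outputs

counts, _ = np.histogram(W_rev, bins=50, density=True)
P_max_rev = counts.max()
H_inf_beta_rev = -np.log(P_max_rev / beta)            # > 0 whenever P_max_rev < beta

dF = -np.log(np.mean(np.exp(-beta * W_fwd))) / beta   # Jarzynski estimate of Delta F
for eps in (0.5, 0.1, 0.01):
    W_eps = np.quantile(W_fwd, 1 - eps)               # measured eps-required work
    bound = dF + (np.log(1 - eps) + H_inf_beta_rev) / beta   # theorem-4 lower bound
    print(f"eps={eps}: measured W^eps = {W_eps:.1f} kT, lower bound = {bound:.1f} kT")
```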


Figure 4. DNA-hairpin unzipping at three different speeds. Using work values from [30, 32, 44], we have plotted the one-shot $\varepsilon $-required work ${W}^{\varepsilon }$ against the failure probability $\varepsilon $. The theoretical lower bound (the red dashed line) derived in this article remains close to the experimentally measured value (light blue) and to the value inferred from the reverse protocol's work distribution via Crooks' theorem (dark blue). The gray dotted line indicates the free-energy difference in each process. The red-blue separation maximizes at about $5{k}_{{\rm{B}}}T$, whereas each trial requires about 300–400 ${k}_{{\rm{B}}}T$, a relative difference of order 1%.


Whereas a work investment of about 300–400 ${k}_{{\rm{B}}}T$ is required to complete the procedure, the jittering between the light blue curve (the directly measured value of ${W}^{\varepsilon }$) and the dark blue curve (the value of ${W}^{\varepsilon }$ predicted from the reverse work distribution ${P}_{\mathrm{rev}}$ via lemma 2) is of a scale less than $5\;{k}_{{\rm{B}}}T$. Hence Crooks' theorem interrelates the work probability distributions up to a discrepancy of around 1%, as argued in [30, 32]. The red curve remains below the dark blue and light blue curves, confirming that the one-shot lower bound in theorem 4 governs this experimental setting. The red curve remains close to—always within about $5{k}_{{\rm{B}}}T$ of (around $1\%$ of ${\rm{\Delta }}F$)—the light blue curve that represents the directly measured work investment. This agreement between theory and experiment suggests that the application of one-shot results may shed light on similar single-molecule experiments and on applications such as molecular motors, thermal ratchets, and nanoscale engines (e.g., [45–47]).

5. Conclusions and outlook

Crooks' theorem relates probability distributions between a process and its reverse. We can manipulate these distributions using tools from one-shot statistical mechanics. As demonstrated in this article, combining the toolkits leads to bounds on the work likely to be required (or produced) in classical and quantum processes. Information about fluctuations tightens the bounds. Fluctuation relations and one-shot statistical mechanics are not competitors, but are mutually compatible. Combining the approaches yields statements about quite general thermal systems. The combination illustrates a possible bridge from one-shot theory to experimental settings through fluctuation theorems.

One experimental application is the cost of bit reset in modern microprocessors. As miniaturization reduces the size of transistors further into the nanoscale (e.g., [48]), limiting only the average heat dissipation does not ensure that devices work. Of increasing importance is a guarantee that no single bit reset dissipates any amount of heat (costs any amount of work) above some threshold that could damage the nanoscale device. The relevant fluctuations can be studied with Crooks' theorem and related, via the results in this article, to the one-shot maximum work cost.

The results in this article might be useful also when the work available to be spent on each trial is limited, or if the work extracted from each trial must exceed a certain threshold, except with bounded failure probability. Such quantities might have uses also in a paranoia setting. An agent might have a known amount of work to invest, and one might need to ensure that the agent cannot erase some information, except with some small probability. Similarly, one-shot work might be applicable in a verification scenario. Suppose an agent claims to be able to provide some amount of work. To test the claim, one can request a transformation that costs more than this amount of work, except in a bounded number of cases.

Future research might reveal further links between one-shot statistical mechanics and fluctuation theorems. Here, through the analysis of fluctuation theorems, we have identified a relationship between one-shot work quantities and the order-$\infty $ Rényi entropy. By considering the Rényi divergence between the work distributions of a process and its reverse, one might find a relationship with one-shot dissipated work, following from the observation that the average dissipated work is proportional to the Kullback–Leibler divergence (average relative entropy) between the forward and reverse work distributions [49]. Considering divergences between distributions avoids the issue of infinitely high entropy when the work distributions contain sharp peaks, provided that each peak's height has a finite ratio to that of the corresponding peak in the reverse distribution. As such, this approach could provide general and robust tools for bridging one-shot statistical mechanics to fluctuation settings, tools that hold for discrete work distributions in addition to the continuous distributions focused on in this article.

Note added: between the first presentation of these results and the current version of this article, related results have appeared in [50, 51].

Acknowledgments

The authors are grateful for conversations with Anna Alemany, Janet Anders, Cormac Browne, Tanapat Deesuwan, Alex Lucas, Jonathan Oppenheim, Felix Pollock, and Ibon Santiago. This work was supported by a Virginia Gilloon Fellowship, an IQIM Fellowship, support from NSF grant PHY-0803371, the FQXi Large Grant for 'Time and the Structure of Quantum Theory,' the EPSRC, the John Templeton Foundation, the Leverhulme Trust, the Oxford Martin School, the National Research Foundation (Singapore), and the Ministry of Education (Singapore). The Institute for Quantum Information and Matter (IQIM) is an NSF Physics Frontiers Center with support from the Gordon and Betty Moore Foundation. VV and OD acknowledge funding from the EU Collaborative Project TherMiQ (Grant Agreement 618074). NYH was visiting the University of Oxford under the auspices of Jonathan Barrett and the Department of Atomic and Laser Physics while much of this paper was developed.

Appendix A. Relationships among thermalization models

Consider a system ${\mathcal{S}}$ governed by a discrete N-level Hamiltonian H. Suppose that ${\mathcal{S}}$ interacts with a heat bath whose inverse temperature is β. An N-dimensional probability vector $\vec{s}$ represents the system's state. Whole or partial thermalization of ${\mathcal{S}}$ can be modeled as a sequence of discrete steps, each represented by a stochastic matrix. Different possible properties of such matrices characterize different models of heat exchanges. We address the properties of Gibbs-preservation, detailed balance, and thermalization. By $\vec{g}$, we denote the probability vector that represents the Gibbs state associated with H and β: $\vec{g}=\frac{1}{Z}\left({{\rm{e}}}^{-\beta {E}_{1}},\ldots ,{{\rm{e}}}^{-\beta {E}_{N}}\right)$, wherein the ${E}_{i}$ denote the eigenvalues of H and $Z={\sum }_{i}{{\rm{e}}}^{-\beta {E}_{i}}$.

A matrix M is Gibbs-preserving relative to H and β if M maps the corresponding Gibbs state to itself:

Equation (A1)
$$M\vec{g}=\vec{g}.$$

Gibbs preservation constrains the unit-eigenvalue eigenspace of M. The set ${\mathcal{G}}$ of Gibbs-preserving matrices on quasiclassical states is equivalent to the set of resource-theory thermal operations on quasiclassical states [20] and to the set of thermal interactions in the game [18].

A strict subset of ${\mathcal{G}}$ is the set ${\mathcal{D}}$ of detailed-balanced matrices: ${\mathcal{D}}\subset {\mathcal{G}}$. Let A and B denote microstates associated with the energies EA and EB. M encodes the probabilities that ${\mathcal{S}}$ transitions from A to B, and vice versa, during one heat-exchange step. If these probabilities satisfy

Equation (A2)
$$\frac{P(A\mapsto B)}{P(B\mapsto A)}={{\rm{e}}}^{-\beta ({E}_{B}-{E}_{A})},$$

M obeys detailed balance [2].

If the steps in an extended heat exchange obey detailed balance, the extended heat exchange obeys microscopic reversibility [33]. From the assumption that heat exchanges are microscopically reversible, Crooks derives his theorem [2]. Hence if the heat exchanges in a process obey detailed balance (and the other assumptions used to derive Crooks' theorem, such as initialization to a thermal state), the process obeys Crooks' theorem.

Crooks defines microscopic reversibility as follows while deriving his theorem [2]. Let $P(x(t)| {\lambda }_{t})$ denote the probability that, if the external parameter varies as ${\lambda }_{t}$ during some forward trial, the state of the classical system ${\mathcal{S}}$ follows the phase-space trajectory $x(t)$. The 'corresponding time reversed path' is denoted by $(\bar{\lambda }(-t),\bar{x}(-t))$. Let the functional $Q[x(t),{\lambda }_{t}]$ denote the heat that ${\mathcal{S}}$ ejects if ${\lambda }_{t}$ and $x(t)$ characterize the forward trial. The heat exchange obeys microscopic reversibility if

Equation (A3)
$$\frac{P(x(t)| {\lambda }_{t})}{P(\bar{x}(-t)| \bar{\lambda }(-t))}={{\rm{e}}}^{-\beta Q[x(t),{\lambda }_{t}]}.$$

Another strict subset of Gibbs-preserving matrices is the set ${\mathcal{T}}$ of thermalizing matrices: ${\mathcal{T}}\subset {\mathcal{G}}$. We call a matrix M thermalizing if it evolves every state $\vec{s}$ of ${\mathcal{S}}$ toward the Gibbs state associated with H and β:

Equation (A4)
$$\mathop{\mathrm{lim}}\limits_{n\to \infty }{M}^{n}\vec{s}=\vec{g}\quad \forall \vec{s}.$$

Equation (A4) encapsulates intuitions about what 'thermalization' means. Some matrices that model thermal interactions in the game and in the thermodynamic resource theories violate equation (A4), as do some thermal interactions governed by Crooks' theorem. ${\mathcal{T}}$ overlaps with ${\mathcal{D}}$.
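These three properties are straightforward to test numerically. A minimal sketch (ours), using the column-stochastic convention ${M}_{ij}=P(j\mapsto i)$ (as in the proof of lemma 9 below) and a Metropolis-type matrix as the example:

```python
# Sketch: numerical checks of Gibbs preservation (A1), detailed balance (A2),
# and thermalization (A4) for a column-stochastic matrix M, with M_ij = P(j -> i).
import numpy as np

def gibbs_state(energies, beta):
    w = np.exp(-beta * np.asarray(energies, dtype=float))
    return w / w.sum()

def is_gibbs_preserving(M, g):
    return np.allclose(M @ g, g)                 # equation (A1): M g = g

def obeys_detailed_balance(M, g):
    D = M * g[np.newaxis, :]                     # D_ij = M_ij g_j
    return np.allclose(D, D.T)                   # equation (A2): M_ij g_j = M_ji g_i

def is_thermalizing(M, g, n=10_000):
    Mn = np.linalg.matrix_power(M, n)            # equation (A4): M^n s -> g for every s
    return all(np.allclose(Mn @ e, g, atol=1e-8) for e in np.eye(len(g)))

def metropolis_matrix(g):
    """Uniform-proposal Metropolis chain: detailed-balanced and (here) thermalizing."""
    d = len(g)
    M = np.zeros((d, d))
    for j in range(d):
        for i in range(d):
            if i != j:
                M[i, j] = min(1.0, g[i] / g[j]) / (d - 1)   # propose, then accept
        M[j, j] = 1.0 - M[:, j].sum()
    return M

g = gibbs_state([0.0, 1.0, 2.0], beta=1.0)
M = metropolis_matrix(g)
print(is_gibbs_preserving(M, g), obeys_detailed_balance(M, g), is_thermalizing(M, g))
```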

The properties we have introduced—Gibbs preservation, detailed balance, and thermalization—imply relationships among Crooks' theorem, theorems about the game, and resource-theory theorems. The game, as well as the thermodynamic resource theories, model the heat exchanges in some processes governed by Crooks' theorem. Crooks' theorem does not necessarily govern all heat exchanges possible in the game or in the resource theories.

A.1. Proof of Venn diagram

Let us justify our modeling of processes governed by Crooks' theorem with the work-extraction game in [18] and with resource theories. Different frameworks (Crooks' theorem, the game, and the resource theories) model interactions with heat baths differently. One step in an interaction can be represented by a stochastic matrix that has at least one of three properties: Gibbs-preservation (${\mathcal{G}}$), detailed balance (${\mathcal{D}}$), and thermalization (${\mathcal{T}}$). The relationships among these matrices are summarized in figure A1 and in the following statements:

  • (i)  
    ${\mathcal{T}}\subset {\mathcal{G}}$: all thermalizing matrices are Gibbs-preserving (lemma 7), but not vice versa (lemma 8).
  • (ii)  
    ${\mathcal{D}}\subset {\mathcal{G}}$: all detailed-balanced matrices are Gibbs-preserving (lemma 9), but not vice versa (lemma 10).
  • (iii)  
    ${\mathcal{D}}\ne {\mathcal{T}}$, ${\mathcal{D}}\;\not\hspace{-2pt}{\subset }\;{\mathcal{T}}$, and ${\mathcal{T}}\;\not\hspace{-2pt}{\subset }\;{\mathcal{D}}$: obeying detailed balance is not equivalent to being thermalizing, and neither category is a subset of the other (lemma 11).
  • (iv)  
    ${\mathcal{D}}\cap {\mathcal{T}}\ne \varnothing $: some matrices are detailed-balanced and thermalizing (lemma 12).


Figure A1. Venn diagram illustrating the relationships among properties of stochastic matrices that model thermal interactions. Dots represent example models.


While proving these claims, we justify the inclusion of two example matrices, the partial swap and quasicycles, in figure A1.

The proofs contain the following notation: ${\mathcal{S}}$ denotes a quasiclassical system that evolves under a Hamiltonian H and that exchanges heat with a bath whose inverse temperature is β. By $\vec{s}=({s}_{1},{s}_{2},\ldots ,{s}_{d})$, we denote the state of ${\mathcal{S}}$. The vector's elements are the diagonal elements of a density matrix relative to the eigenbasis of H. The Gibbs state relative to H and to β is denoted by $\vec{g}$.

To prove some of the foregoing claims, we characterize thermalizing matrices with the Perron–Frobenius theorem [52]. The theorem governs irreducible aperiodic non-negative matrices M. Consider the eigenvalue λ of M that has the greatest absolute value. According to the Perron–Frobenius Theorem, λ is the only positive real eigenvalue of M, and λ is associated with the only non-negative eigenvector ${\vec{v}}_{\lambda }$ of M. Suppose that M is stochastic, such that $\lambda =1$. By the spectral decomposition theorem, ${\mathrm{lim}}_{n\to \infty }{M}^{n}\vec{s}={\vec{v}}_{\lambda }$. If ${\vec{v}}_{\lambda }=\vec{g}$, the matrix is thermalizing.

Lemma 7. All thermalizing matrices are Gibbs-preserving.

Proof. Let M denote a thermalizing matrix associated with the same Hamiltonian and β as $\vec{g}$. For all states $\vec{s}$ of ${\mathcal{S}}$,

Equation (A5)
$$\mathop{\mathrm{lim}}\limits_{n\to \infty }{M}^{n}\vec{s}=\vec{g}.$$

To prove the lemma by contradiction, we suppose that M does not map $\vec{g}$ to itself: $\vec{g}\;\not\hspace{-2pt}{\mapsto }\;\vec{g}$.

Premultiplying equation (A5) by M generates

Equation (A6)
$$M\mathop{\mathrm{lim}}\limits_{n\to \infty }{M}^{n}\vec{s}=\mathop{\mathrm{lim}}\limits_{n\to \infty }{M}^{n+1}\vec{s}=M\vec{g}\ne \vec{g}.$$

This equation contradicts

Equation (A7)
$$\mathop{\mathrm{lim}}\limits_{n\to \infty }{M}^{n+1}\vec{s}=\mathop{\mathrm{lim}}\limits_{n\to \infty }{M}^{n}\vec{s}=\vec{g}.$$

By the contrapositive, all thermalizing matrices are Gibbs-preserving. □

Lemma 8. Not all Gibbs-preserving matrices are thermalizing.

Proof. In general, this will be the case for matrices which have the Gibbs state as an eigenvector but do not otherwise satisfy the conditions of irreducibility or aperiodicity required for the Perron–Frobenius theorem to govern their behavior. To prove the lemma by example, we construct one Gibbs-preserving matrix that is not thermalizing. Consider a block-diagonal stochastic $N\times N$ matrix M. (Being block-diagonal, M is reducible.) Let M decompose into two submatrices: $M={M}_{1}\oplus {M}_{2}$. Let M1 be defined on the first n1 energy levels, and let M2 be defined on the remaining n2 energy levels.

Denote by ${\vec{g}}_{1}$ the Gibbs state associated with the first ${n}_{1}$ energies (and the partition function ${Z}_{1}$), and by ${\vec{g}}_{2}$ the Gibbs state associated with the final ${n}_{2}$ energies (and the partition function ${Z}_{2}$). Suppose that

Equation (A8)
$${\vec{\nu }}_{1}\equiv \left(\begin{array}{c}{\vec{g}}_{1}\\ \vec{0}\end{array}\right)\quad {\rm{and}}\quad {\vec{\nu }}_{2}\equiv \left(\begin{array}{c}\vec{0}\\ {\vec{g}}_{2}\end{array}\right)$$

are normalized probability eigenvectors of M, each associated with the unit eigenvalue. Every vector of the form

Equation (A9)
$${\vec{\nu }}_{\alpha }\equiv \alpha {\vec{\nu }}_{1}+(1-\alpha ){\vec{\nu }}_{2},\qquad \alpha \in [0,1],$$

is also a normalized probability eigenvector of M associated with the unit eigenvalue.

The possible forms of ${\vec{\nu }}_{\alpha }$ form a family. One member of the family is the Gibbs state $\vec{g}$, which corresponds to $\alpha ={Z}_{1}/Z$ (wherein Z denotes the total partition function). Hence $\vec{g}\in {\{{\vec{\nu }}_{\alpha }\}}_{\alpha \in [0,1]}$ is an eigenvector of M, and M is Gibbs-preserving. However, $\vec{g}$ is not the only eigenvector associated with the unit eigenvalue. M does not evolve every initial state toward $\vec{g}$. In general, ${\mathrm{lim}}_{n\to \infty }{M}^{n}\vec{s}={\nu }_{\alpha }$, wherein α is the total occupation probability of the first n1 energy levels of $\vec{s}$. As some states $\vec{s}$ correspond to $\alpha \ne {Z}_{1}/Z$ and to ${\nu }_{\alpha }\ne g$, M does not map every initial state to the Gibbs state. Hence M is not thermalizing. Our claim has been proved by example. □
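A numerical instance of this construction (our sketch; each block is a partial swap toward the renormalized Gibbs state of that block):

```python
# Sketch: a block-diagonal, Gibbs-preserving, non-thermalizing stochastic matrix (lemma 8).
import numpy as np

beta = 1.0
energies = np.array([0.0, 1.0, 2.0, 3.0])
g = np.exp(-beta * energies); g /= g.sum()

def block(gibbs_part, p=0.5):
    """Partial swap toward the renormalized Gibbs state of a block of levels."""
    gb = gibbs_part / gibbs_part.sum()
    return p * np.tile(gb, (len(gb), 1)).T + (1 - p) * np.eye(len(gb))

M = np.zeros((4, 4))
M[:2, :2] = block(g[:2])           # M_1, acting on the first two levels
M[2:, 2:] = block(g[2:])           # M_2, acting on the last two levels

print(np.allclose(M @ g, g))                        # Gibbs-preserving: True
s = np.array([1.0, 0.0, 0.0, 0.0])                  # a state confined to the first block
print(np.linalg.matrix_power(M, 1000) @ s)          # converges to nu_1, not to g
print(g)
```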

Together, lemmas 7 and 8 imply the strict relation ${\mathcal{T}}\subset {\mathcal{G}}$.

Lemma 9. All matrices that obey detailed balance relative to the Hamiltonian H and the inverse temperature β preserve the Gibbs state $\vec{g}$ associated with H and β.

Proof. We will write the forms of the elements in an arbitrary detailed-balanced stochastic $N\times N$ matrix M. By performing matrix multiplication explicitly, we show that $M\vec{g}=\vec{g}$.

Let Mij denote the element in the $i\mathrm{th}$ row and $j\mathrm{th}$ column of M. This element equals the probability that, upon beginning in the $j\mathrm{th}$ energy level, a system ${\mathcal{S}}$ transitions to the $i\mathrm{th}$ level. Let gi denote the thermal population of level i (the $i\mathrm{th}$ element of $\vec{g}$).

Detailed balance and stochasticity constrain the relationships among the Mij. By the definition of detailed balance (equation (A2)), the elements in the lower left-hand triangle of M are related to the elements in the upper right-hand triangle by

Equation (A10)
$${M}_{ij}={M}_{ji}\,\frac{{g}_{i}}{{g}_{j}},\qquad i\gt j.$$

The matrix has the form

Equation (A11)
$$M=\left(\begin{array}{cccc}{M}_{11} & {M}_{12} & \cdots & {M}_{1N}\\ {M}_{12}\frac{{g}_{2}}{{g}_{1}} & {M}_{22} & \cdots & {M}_{2N}\\ \vdots & \vdots & \ddots & \vdots \\ {M}_{1N}\frac{{g}_{N}}{{g}_{1}} & {M}_{2N}\frac{{g}_{N}}{{g}_{2}} & \cdots & {M}_{NN}\end{array}\right).$$

Because M is stochastic, the elements in each column sum to one. This normalization condition fixes each diagonal element Mii as a function of the other Mij that occupy the same column:

Equation (A12)
$${M}_{ii}=1-\sum _{j\ne i}{M}_{ji}.$$

Using index notation, we ascertain how M transforms the Gibbs state $\vec{g}$:

Equation (A13)
$$\begin{array}{rcl}{(M\vec{g})}_{i} & = & \displaystyle \sum _{j}{M}_{ij}{g}_{j}={M}_{ii}{g}_{i}+\displaystyle \sum _{j\ne i}{M}_{ij}{g}_{j}\\ & = & {g}_{i}-\displaystyle \sum _{j\ne i}{M}_{ji}{g}_{i}+\displaystyle \sum _{j\ne i}{M}_{ij}{g}_{j}\\ & = & {g}_{i}-\displaystyle \sum _{j\ne i}{M}_{ij}{g}_{j}+\displaystyle \sum _{j\ne i}{M}_{ij}{g}_{j}={g}_{i}.\end{array}$$

The second line follows from the substitution of equation (A12) for ${M}_{ii}$. The third line follows from the substitution of equation (A10) into the elements of the first sum.

We have shown that $\vec{g}$ is an eigenvector of M that corresponds to the unit eigenvalue. An $N\times N$ matrix M that obeys detailed balance relative to H and β preserves the Gibbs state associated with H and β. □

Lemma 10. Not every Gibbs-preserving matrix for some Hamiltonian H and inverse temperature β satisfies detailed balance for H and β.

Proof. Gibbs-preservation only places a restriction on one eigenvalue of a matrix, and so there remains enough freedom to choose a matrix exhibiting this property, but not detailed balance. An example of such is a quasicycle, described in [20]. A quasicycle is a process that has a probability $P(i\mapsto (i+1)\mathrm{mod}N)\equiv {p}_{i}$ of evolving a system ${\mathcal{S}}$ that occupies energy eigenstate i to eigenstate $i+1$ and has a probability $1-{p}_{i}$ of keeping ${\mathcal{S}}$ in state i. All ${p}_{i}\gt 0$, and for at least one value of i, ${p}_{i}\lt 1$. The directed graph of a quasicycle forms a ring in which at least one node also has a loop to itself, corresponding to a value of $(1-{p}_{i})\gt 0$. An example appears in figure A2 . The probability that i evolves to j forms element Mij of matrix M. The matrix fails to satisfy detailed balance if ${\mathcal{S}}$ has more than three energy eigenstates, because $P(i\mapsto (i+1)\mathrm{mod}N)$ is finite, though $P((i+1)\mathrm{mod}N\mapsto i)=0$.

We will show that, if the pi assume certain values, the Gibbs state $\vec{g}$ is an eigenvector of M. Let level i = 1 correspond to the lowest energy eigenvalue. Solutions of the form

Equation (A14)

wherein $p\in (0,1)$ denotes a free parameter, are Gibbs-preserving. Roughly, the greater the value of p, the more quickly the quasicycle is traversed.

To verify that equation (A14) describes a Gibbs-preserving matrix, we express the matrix multiplication $M\vec{g}$ in index form:

Equation (A15)

Equation (A16)

Upon substituting in from equation (A14), we can simplify the equations to

Equation (A17)

Equation (A18)

Hence $M\vec{g}=\vec{g}$, so M preserves Gibbs states. □
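A short numerical sketch of a Gibbs-preserving quasicycle follows. The parametrization of the ${p}_{i}$ below is one consistent choice (obtained by requiring ${p}_{i}{g}_{i}$ to be level-independent and scaling so that the largest transition probability equals a free parameter p); it need not coincide with equation (A14). The spectrum and β are illustrative.

```python
import numpy as np

beta, p = 1.0, 0.5
E = np.array([0.0, 0.4, 0.9, 1.5])            # level 1 is the lowest energy
g = np.exp(-beta * E)
g /= g.sum()

# Gibbs preservation requires p_i * g_i to be the same for every level (assumption:
# this particular scaling, not necessarily the parametrization of equation (A14)).
p_i = p * g.min() / g
N = len(E)
M = np.diag(1.0 - p_i)
for i in range(N):
    M[(i + 1) % N, i] = p_i[i]                # i -> (i+1) mod N with probability p_i

print(np.allclose(M @ g, g))                  # True: Gibbs-preserving
print(np.allclose(M * g, (M * g).T))          # False: detailed balance fails
```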

Together, lemmas 9 and 10 imply the strict relation ${\mathcal{D}}\subset {\mathcal{G}}$.


Figure A2. Directed graph illustrating a four-level quasicycle: the associated matrix fails to satisfy detailed balance, but a judicious choice of the ${p}_{i}$ ensures that the matrix is thermalizing.


Lemma 11. Obeying detailed balance is not equivalent to thermalizing: ${\mathcal{D}}\ne {\mathcal{T}}$, nor is one category a subset of the other.

Proof. We will show that the quasicycle matrix M described in the proof of lemma 10—a matrix that does not obey detailed balance—is thermalizing. Because M is stochastic by construction, its greatest eigenvalue equals one.

In addition to being stochastic, M is irreducible, aperiodic$^{8}$, and non-negative. By the Perron–Frobenius Theorem, the greatest eigenvalue λ of M corresponds to the only non-negative eigenvector ${\vec{v}}_{\lambda }$ of M. This $\lambda =1$, because M is stochastic. As shown in the proof of lemma 10, ${\vec{v}}_{\lambda }=\vec{g}$. As explained below the proof of lemma 7, ${\mathrm{lim}}_{n\to \infty }{M}^{n}$ maps every state $\vec{s}$ to $\vec{g}$: the matrices that represent quasicycles thermalize. M does not obey detailed balance, as discussed in the proof of lemma 10.

A simple example of a matrix that obeys detailed balance but is not thermalizing is the identity matrix. Less trivially, one can engineer a block-diagonal matrix of the form given in lemma 8; if each block obeys detailed balance, the matrix as a whole also obeys detailed balance (note that $P(A\mapsto B)=P(B\mapsto A)=0$ trivially satisfies detailed balance). Such a matrix is not thermalizing. Hence thermalizing is not equivalent to obeying detailed balance: ${\mathcal{D}}\ne {\mathcal{T}}$, and neither category is a subset of the other. □

Lemma 12. Some matrices are detailed-balanced and thermalizing: ${\mathcal{D}}\cap {\mathcal{T}}\ne \varnothing $.

Proof. We can prove this lemma by example. After reviewing the form of the partial-swap matrix M, we show that M thermalizes, then show that M obeys detailed balance.

A partial-swap operation has some probability p of replacing the operated-on state $\vec{s}$ with a thermal state and a probability $1-p$ of preserving $\vec{s}$. If $\vec{s}$ denotes the state of an N-level system,

$M=p\,G+(1-p)\,{{\mathbb{1}}}_{N},$   (A19)

wherein ${{\mathbb{1}}}_{N}$ denotes the $N\times N$ identity and every column of the matrix G is the thermal state $\vec{g}$.

Let us prove that M thermalizes. M is stochastic, as it is the probabilistic combination of ${{\mathbb{1}}}_{N}$ and G, which are stochastic. If N is finite, G is positive; so when $p\gt 0$, M is positive. Positivity implies irreducibility and aperiodicity. Hence the Perron–Frobenius Theorem$^{9}$ implies that M has just one non-negative eigenvector ${\vec{v}}_{\lambda }$ and that this eigenvector corresponds to $\lambda =1$. Direct multiplication shows $M\vec{g}=\vec{g}$. Thus, $\vec{g}={\vec{v}}_{\lambda }$ is the only non-negative eigenvector of M and corresponds to the largest eigenvalue. By the argument above lemma 7, M thermalizes.

To show that M obeys detailed balance, we compare the matrix elements that represent the probabilities of transitions between states i and j:

${M}_{ij}\,{g}_{j}=\left[(1-p)\,{\delta }_{ij}+p\,{g}_{i}\right]{g}_{j}=\left[(1-p)\,{\delta }_{ji}+p\,{g}_{j}\right]{g}_{i}={M}_{ji}\,{g}_{i},$   (A20)

wherein ${\delta }_{{ij}}$ denotes the Kronecker delta. This equation recapitulates the definition of detailed balance (equation (A2)). Hence matrices—such as the partial swap—can obey detailed balance while thermalizing. □
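The partial swap is easy to check numerically. The following sketch (hypothetical three-level spectrum) builds $M=pG+(1-p){\mathbb{1}}_{N}$, confirms detailed balance, and confirms that repeated application drives every state to $\vec{g}$:

```python
import numpy as np

beta, p, N = 1.0, 0.3, 3
E = np.array([0.0, 0.5, 1.2])                       # hypothetical spectrum
g = np.exp(-beta * E)
g /= g.sum()

G = np.tile(g[:, None], (1, N))                     # every column is the Gibbs state
M = p * G + (1 - p) * np.eye(N)                     # partial swap, equation (A19)

print(np.allclose(M * g, (M * g).T))                # True: detailed balance holds
print(np.allclose(np.linalg.matrix_power(M, 200), G))   # True: M^n -> G, so M thermalizes
```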

Appendix B. Quantum derivation of generalized Jarzynski equalities

The results in section 3.2 apply to classical and quantum systems. To shed extra light on quantum applications, we present an alternative derivation of lemma 2 for a quantum system whose energy spectrum is discrete and that lacks contact with the heat bath while its Hamiltonian changes.

Work is defined as the difference between the outcomes of energy measurements near the protocol's start and end. This definition of work, which appears in [5, 13, 15], differs from the definition in [6]. The discrete version of ${\chi }_{\mathrm{rev}}^{\varepsilon }(\beta )$ will be defined via analogy with equation (12):

Equation (B1)

Let ${\mathcal{S}}$ denote a quantum system characterized by an external parameter ${\lambda }_{t}$ and governed by a Hamiltonian $H({\lambda }_{t})$ whose energy spectrum is discrete. Let β denote the inverse temperature of the heat bath with which ${\mathcal{S}}$ interacts at times $t\in (-\infty ,-\tau )$. At $t=-\tau $, ${\mathcal{S}}$ is projectively measured in the energy eigenbasis, then isolated from the bath. Until $t=\tau $, the Hamiltonian is varied from $H({\lambda }_{-\tau })$ to $H({\lambda }_{\tau })$, the isolated ${\mathcal{S}}$ evolves under the unitary $U(2\tau )$ generated by $H({\lambda }_{t})$, and ${\mathcal{S}}$ is perturbed out of equilibrium. At $t=\tau $, the energy of ${\mathcal{S}}$ is measured projectively. Define the work W performed on ${\mathcal{S}}$ as the difference between the measurements' outcomes.

Lemma. The epsilon-required work satisfies

Equation (B2)

Proof. Let $\{| {\phi }_{m}(-\tau )\rangle \}$ and $\{{E}_{m}(-\tau )\}$ denote the eigenstates and eigenvalues of $H({\lambda }_{-\tau })$, and let $\{| {\phi }_{n}(\tau )\rangle \}$ and $\{{E}_{n}(\tau )\}$ denote those of $H({\lambda }_{\tau })$. If the measurements yield outcomes m and n, the forward trial consumes $W\equiv {E}_{n}(\tau )-{E}_{m}(-\tau )$.

The time-reversed protocol proceeds from $t=\infty $ to $t=-\infty $ and is defined as in section 2.1. Let ${p}_{n}(\tau )$ denote the probability that the first measurement during a reverse trial yields ${E}_{n}(\tau )$; and let ${p}_{\mathrm{rev}}(m| n)$ denote the probability that, if the first measurement yields ${E}_{n}(\tau )$, the second yields ${E}_{m}(-\tau )$. By definition,

Equation (B3)

wherein

Equation (B4)

Invoking ${p}_{n}(\tau )={{\rm{e}}}^{-\beta {E}_{n}(\tau )}/{Z}_{\tau }$, we cancel the ${E}_{n}(\tau )$-dependent exponentials. ${p}_{\mathrm{rev}}(m| n)$ equals the probability ${p}_{\mathrm{fwd}}(n| m)$ that, if an energy measurement at $t=-\tau $ yields ${E}_{m}(-\tau )$ during a forward trial, an energy measurement at τ yields ${E}_{n}(\tau )$:

Equation (B5)

Substitution into equation (B3) yields

Equation (B6)

Upon multiplying by ${Z}_{-\tau }/{Z}_{-\tau }$, we replace ${{\rm{e}}}^{-\beta {E}_{m}(-\tau )}/{Z}_{-\tau }$ with ${p}_{m}(-\tau )$:

Equation (B7)

The final equality follows from $F(\gamma )=-T\mathrm{log}Z$ and from the definition of epsilon. □

An analogous argument yields equation (9) in lemma 2.
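For readers who want to experiment, the following sketch implements the two-projective-measurement definition of work for a driven qubit with hypothetical initial and final Hamiltonians, and verifies the standard Jarzynski equality $\langle {{\rm{e}}}^{-\beta W}\rangle ={Z}_{\tau }/{Z}_{-\tau }$ for this definition of work; it is a toy illustration, not the protocol of any particular experiment.

```python
import numpy as np
from scipy.linalg import expm

beta = 1.0
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
H_i, H_f = 0.5 * sz, 1.5 * sz + 0.4 * sx       # hypothetical initial and final Hamiltonians
U = expm(-1j * (H_i + H_f))                    # some unitary connecting the two measurements

Ei, Vi = np.linalg.eigh(H_i)
Ef, Vf = np.linalg.eigh(H_f)
Z_i, Z_f = np.exp(-beta * Ei).sum(), np.exp(-beta * Ef).sum()

avg = 0.0
for m in range(2):                             # first projective energy measurement
    p_m = np.exp(-beta * Ei[m]) / Z_i          # thermal weight of outcome E_m(-tau)
    for n in range(2):                         # second projective energy measurement
        p_n_given_m = abs(Vf[:, n].conj() @ U @ Vi[:, m]) ** 2
        W = Ef[n] - Ei[m]                      # work = difference of measurement outcomes
        avg += p_m * p_n_given_m * np.exp(-beta * W)

print(np.isclose(avg, Z_f / Z_i))              # True: <e^{-beta W}> = Z_tau / Z_{-tau}
```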

Appendix C. Details of the work-extraction game

C.1. Description of the game

Let us briefly review the bound presented by Egloff et al [18]. Consider the most efficient transformation $(\rho ,{H}_{\rho })\mapsto (\sigma ,{H}_{\sigma })$ that has a probability δ of failing. That is, one sacrifices the certainty that the transformation will succeed, in the hope of extracting more work than a transformation certain to succeed can yield. All work that the system can output is collected; none is wasted. The transformation has a probability $1-\delta $ of outputting at least the work

Equation (C1)

We will briefly review the geometric definitions of Gibbs rescaling (${G}_{T}$) and of the relative mixedness (M).

Let ρ have the spectral decomposition ${\displaystyle \sum }_{i=1}^{{d}_{\rho }}{r}_{i}| {E}_{i}\rangle \langle {E}_{i}| $ such that

Equation (C2)

Consider the histogram that represents the ${r}_{i}$. Gibbs-rescaling ρ resizes each box in the histogram. The width of box i changes from unity to ${{\rm{e}}}^{-\beta {E}_{i}}$, and the box's height increases by a factor of ${{\rm{e}}}^{\beta {E}_{i}}$. Denote by ${h}_{\rho }^{T}(u)$ the height of the point, on the rescaled histogram, whose x-coordinate is $u\in [0,Z({H}_{\rho })]$. Integrating ${h}_{\rho }^{T}(u)$, we define the Gibbs-rescaled Lorenz curve as the set of points

Equation (C3)

The (unscaled) Lorenz curve ${L}_{\rho }$ is equivalent to ${L}_{\rho }^{0}$. Upon Gibbs-rescaling ρ and σ, we can compare the states' resourcefulness even though different Hamiltonians govern the states.

To incorporate the failure probability into the curve, we stretch ${L}_{\rho }^{T}$ upward by a factor of $1/(1-\delta )$. The resulting curve, ${L}_{\rho }^{T,\delta }$, encodes more resourcefulness than $(\rho ,{H}_{\rho })$ reliably possesses, because extractable work trades off with the failure probability δ. Consider plotting ${L}_{\rho }^{T,\delta }$ on the same graph as ${L}_{\sigma }^{T}$. The curves are concave, bowing outward from the x-axis or stretching straight from $(0,0)$ to $y=1$. Consider compressing ${L}_{\sigma }^{T}$ leftward. M denotes the inverse of the greatest factor by which ${L}_{\sigma }^{T}$ can be compressed without popping above ${L}_{\rho }^{T,\delta }$:

Equation (C4)

Illustrations appear in [18]. While transforming $(\rho ,{H}_{\rho })$ into $(\sigma ,{H}_{\sigma })$, the player can extract no more work than $T\mathrm{log}M$: according to equation (C1),

Equation (C5)
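The curve construction can be sketched in a few lines of code. The snippet below assumes the populations are supplied in the ordering prescribed by equation (C2) and verifies that, for a Gibbs state, the Gibbs-rescaled Lorenz curve is the straight line from $(0,0)$ to $(Z,1)$, the fact used in appendix C.2:

```python
import numpy as np

def gibbs_rescaled_lorenz(r, E, beta, num=200):
    """Gibbs-rescaled Lorenz curve of a quasiclassical state.

    r : populations r_i, ordered as prescribed by equation (C2)
    E : corresponding energy eigenvalues
    Returns arrays (u, L) sampling the curve on [0, Z(H_rho)].
    """
    widths = np.exp(-beta * np.asarray(E))      # box i widens to e^{-beta E_i}
    heights = np.asarray(r) / widths            # and its height grows by e^{beta E_i}
    edges = np.concatenate(([0.0], np.cumsum(widths)))
    u = np.linspace(0.0, edges[-1], num)
    L = np.array([np.sum(heights * np.clip(x - edges[:-1], 0.0, widths)) for x in u])
    return u, L

beta = 1.0
E = np.array([0.0, 0.6, 1.3])                   # hypothetical spectrum
Z = np.exp(-beta * E).sum()
g = np.exp(-beta * E) / Z
u, L = gibbs_rescaled_lorenz(g, E, beta)
print(np.allclose(L, u / Z))                    # True: a Gibbs state's curve is a straight line
```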

C.2. Tightening and generalizing a one-shot bound with Crooks' theorem

Theorem 5 strengthens the above inequality (C5) (in the appropriate parameter regime), and theorem 6 generalizes this to work investment. These theorems are proved below:

Theorem. The work δ-extractable from each Crooks-type reverse trial satisfies

Equation (C6)

Proof. During the reverse protocol, the state of a system ${\mathcal{S}}$ transforms as

Equation (C7)

wherein σ denotes some density operator that likely is not an equilibrium state.

The set of strategies for transforming $({\gamma }_{\tau },{H}_{\tau })$ into $({\gamma }_{-\tau },{H}_{-\tau })$ contains all strategies that achieve this transformation via $(\sigma ,{H}_{-\tau })$. Hence, when we optimize to maximize the work output while allowing a failure probability of δ,

Equation (C8)

The final term (which involves only thermalization) always succeeds, contributes to neither the failure probability nor the work cost, and hence can be eliminated. An arbitrary, possibly suboptimal, strategy for transforming $({\gamma }_{\tau },{H}_{\tau })$ into $(\sigma ,{H}_{-\tau })$ generates work ${w}^{\delta }$ bounded by this optimum:

Equation (C9)

Hence, by equation (C1),

Equation (C10)

Let us calculate M. ${L}_{{\gamma }_{-\tau }}^{T}$ stretches straight from $(0,0)$ to $({Z}_{-\tau },1)$, whereas ${L}_{{\gamma }_{\tau }}^{T}/(1-\delta )$ stretches straight to $({Z}_{\tau },\frac{1}{1-\delta })$. Compressing ${L}_{{\gamma }_{\tau }}^{T}(u)/(1-\delta )$ leftward by a factor of ${M}^{-1}=\frac{{Z}_{\tau }(1-\delta )}{{Z}_{-\tau }}$ keeps the latter curve from dipping below ${L}_{{\gamma }_{-\tau }}^{T}(u)$. Hence

Equation (C11)

wherein ${\rm{\Delta }}F\equiv F({\gamma }_{\tau })-F({\gamma }_{-\tau })$. We substitute from equation (C11) into the inequality (13) derived from Crooks' theorem in theorem 3. □

Crooks' theorem introduces an ${H}_{\infty }$ into the bound on extractable work derived from [18]. If ${P}_{\mathrm{fwd}}$ satisfies inequality (18), this ${H}_{\infty }$ strengthens the bound. A work cost can be derived similarly from [18], then enhanced with Crooks' theorem.

Appendix D. Details of thermodynamic resource theories

D.1. Description of thermodynamic resource theories

Each thermodynamic resource theory models energy-preserving transformations performed with a heat bath characterized by an inverse temperature β. To specify a state, one specifies a density operator and a Hamiltonian: $(\rho ,H)$. Sums of Hamiltonians will be denoted by ${H}_{1}+{H}_{2}\equiv {H}_{1}\otimes {\mathbb{1}}+{\mathbb{1}}\otimes {H}_{2}$.

Thermal operations can be performed for free. Each consists of three steps: (1) a Gibbs state relative to β and to any Hamiltonian ${H}_{\gamma }$ can be drawn from the bath:

$\gamma =\displaystyle \frac{{{\rm{e}}}^{-\beta {H}_{\gamma }}}{{Z}_{\gamma }},\qquad {Z}_{\gamma }=\mathrm{Tr}\left({{\rm{e}}}^{-\beta {H}_{\gamma }}\right).$   (D1)

(Below, the Gibbs state relative to β and to ${H}_{\gamma }$ will be denoted also by $\gamma ({H}_{\gamma })$.) (2) Any unitary U that conserves the total energy can be implemented, and (3) any subsystem A associated with its own Hamiltonian ${H}_{A}$ can be discarded. Each thermal operation on $(\rho ,H)$ has the form

Equation (D2)

wherein $[U,H+{H}_{\gamma }]=0$.
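As a concrete toy instance of a thermal operation, the sketch below draws a single-qubit Gibbs state from the bath, applies a unitary that swaps two degenerate levels of the composite (so the total energy is conserved), and traces out the drawn subsystem; the energies are hypothetical.

```python
import numpy as np

beta = 1.0
H_S = np.diag([0.0, 1.0])                      # system Hamiltonian (hypothetical)
H_G = np.diag([0.0, 1.0])                      # Hamiltonian of the state drawn from the bath
gamma = np.diag(np.exp(-beta * np.diag(H_G)))
gamma /= np.trace(gamma)                       # Gibbs state gamma(H_G)

H_tot = np.kron(H_S, np.eye(2)) + np.kron(np.eye(2), H_G)
U = np.eye(4)
U[[1, 2]] = U[[2, 1]]                          # swap the degenerate levels |01> and |10>
assert np.allclose(U @ H_tot, H_tot @ U)       # [U, H + H_gamma] = 0: energy-conserving

rho = np.diag([0.2, 0.8])                      # some system state
joint = U @ np.kron(rho, gamma) @ U.conj().T
rho_out = joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # discard the drawn subsystem
print(rho_out.real)                            # output state of the thermal operation
```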

D.2. Applicability of Crooks' theorem

Some resource-theory operations are Markovian and obey detailed balance. We can use resource theories to model processes governed by Crooks' theorem if we define a battery and a clock. Our model for the battery appears in [23] and resembles the model in [21]. The quasiclassical battery B has closely spaced energy levels and occupies an energy eigenstate:

Equation (D3)

If ${E}_{{B}_{i}}$ is large, a work-costing (forward) process can transfer work from the battery to ${\mathcal{S}}$. If ${E}_{{B}_{i}}$ is small, a work-extraction (reverse) process can transfer work from ${\mathcal{S}}$ to the battery.

We model the evolution of H with a clock C that occupies a pure state $| {C}_{j}\rangle $ [20, 26]. The changing of $| {C}_{j}\rangle $, like the movement of a clock hand, models the passing of instants. In processes governed by Crooks' theorem, $H=H({\lambda }_{t})$. We discretize t such that the system's Hamiltonian is $H({\lambda }_{{t}_{j}})$ when the clock occupies the state $| {C}_{j}\rangle $. In the notation introduced earlier, ${t}_{1}=-\tau $, and ${t}_{n}=\tau $. The composite-system Hamiltonian

Equation (D4)

remains constant.
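Equation (D4) itself is not reproduced here, but a minimal sketch of one composite Hamiltonian of the kind described, with the system Hamiltonian conditioned on the clock register and the battery Hamiltonian added on its own register, is given below (dimensions and energies are hypothetical):

```python
import numpy as np

n_steps = 4                                              # clock dimension: t_1, ..., t_n
H_S = [np.diag([0.0, 0.1 * j]) for j in range(n_steps)]  # H(lambda_{t_j}), hypothetical
H_B = np.diag(np.linspace(0.0, 1.0, 5))                  # closely spaced battery levels

d_S, d_C, d_B = 2, n_steps, H_B.shape[0]
H_SCB = sum(np.kron(np.kron(H_S[j], np.diag(np.eye(d_C)[j])), np.eye(d_B))
            for j in range(d_C))                         # system term conditioned on |C_j>
H_SCB = H_SCB + np.kron(np.eye(d_S * d_C), H_B)          # battery term
print(H_SCB.shape)                                       # (40, 40); this operator never changes
```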

Having defined the battery and clock, we define the work extractable from, and the work cost of, a protocol. Let ${E}_{{B}_{0}}=0$. The most work extractable from the reverse protocol equals the greatest ${E}_{{B}_{m}}$ for which some sequence of thermal operations evolves the state of ${\mathcal{S}}{CB}$ as

Equation (D5)

wherein $\rho ({t}_{i})$ represents the state occupied by ${\mathcal{S}}$ at time ti. The forward protocol's minimum work cost equals the least ${E}_{{B}_{n}}$ for which a sequence of thermal operations implements

Equation (D6)

Results in [20] can be applied if all the states commute with their Hamiltonians. Horodecki and Oppenheim have calculated the maximum work yield, or minimum work cost, of any quasiclassical transformation $(\rho ,{H}_{\rho })\mapsto (\sigma ,{H}_{\sigma })$ by thermal operations. They have also calculated faulty transformations' work yields and work costs. A faulty transformation generates a state $({\sigma }^{\prime },{H}_{\sigma })$ that differs from the desired state. The discrepancy is quantified by the trace distance between the density operators:

Equation (D7)

According to [20], this ε can be interpreted as the probability that the process fails to accomplish its mission, just as ε and δ were interpreted earlier. Generalizations to nonclassical states appear in [38, 39].
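For completeness, a small helper for the trace distance is sketched below; it uses the convention $\frac{1}{2}\Vert \rho -\sigma {\Vert }_{1}$, which may differ from the normalization in equation (D7) by a constant factor.

```python
import numpy as np

def trace_distance(rho, sigma):
    """0.5 * || rho - sigma ||_1 for Hermitian operators (one common convention)."""
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

sigma = np.diag([0.7, 0.3])                    # desired output state
sigma_prime = np.diag([0.65, 0.35])            # faulty output state
print(trace_distance(sigma_prime, sigma))      # 0.05
```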

Appendix E. Details of numerical simulation

Our simulation of Landauer bit reset and Szilard work extraction resembles the scenario presented in [29], which models a two-level quasiclassical system ${\mathcal{S}}$. At each time t, the energy ${\mathcal{E}}(t)$ of ${\mathcal{S}}$ equals ${E}_{0}$ or ${E}_{1}(t)$. The state of ${\mathcal{S}}$ is represented by a vector $\vec{s}(t)=(p(t),1-p(t))$, wherein $p(t)$ equals the probability that ${\mathcal{E}}(t)={E}_{0}$.

If observers have different amounts of information about ${\mathcal{E}}(t)$, they ascribe different values to $p(t)$. Suppose an agent draws ${\mathcal{S}}$ from a temperature-$(1/\beta )$ heat bath. According to this ignorant agent,

$\vec{s}(t)=\left(\displaystyle \frac{{{\rm{e}}}^{-\beta {E}_{0}}}{Z(t)},\;\displaystyle \frac{{{\rm{e}}}^{-\beta {E}_{1}(t)}}{Z(t)}\right),\qquad Z(t)={{\rm{e}}}^{-\beta {E}_{0}}+{{\rm{e}}}^{-\beta {E}_{1}(t)}.$   (E1)

According to an omniscient observer, $\vec{s}(t)=(1,0)$ or $(0,1)$. The code is written from the perspective of an omniscient observer. On average, the code's predictions coincide with the predictions that code written by an ignorant agent would make.

While $t\in (-\infty ,-\tau )$ during the forward (erasure) protocol, ${E}_{1}(t)={E}_{0}=0$, and ${\mathcal{S}}$ is thermally equilibrated. According to the ignorant agent, $\vec{s}(t)=(\displaystyle \frac{1}{2},\displaystyle \frac{1}{2})$. Beginning at $t=-\tau $, the agent raises ${E}_{1}$ by the infinitesimal amount ${\rm{d}}E$ while preserving $\vec{s}(t)$. Then, the agent couples ${\mathcal{S}}$ to the bath for some time interval. The raising and coupling are repeated until $t=\tau $ and ${E}_{1}(\tau )={E}_{\mathrm{max}}$.

The agent's actions are simulated as follows: our code has a probability $\displaystyle \frac{1}{2}$ of representing the initial state $\vec{s}(-\tau )$ with $(1,0)$ and a probability $\displaystyle \frac{1}{2}$ of representing $\vec{s}(-\tau )$ with $(0,1)$. Consider one thermal interaction that occurs at some $t\in (-\tau ,\tau )$. If $\vec{s}(t)=(1,0)$ before the thermal interaction, the agent invests no work to raise E1. If $\vec{s}(t)=(0,1)$, the agent invests work dE.

A probabilistic swap models each interaction with the heat bath [29]. $\vec{s}(t)$ has a probability ${P}_{\mathrm{swap}}$ of being exchanged with a pure state sampled from a Gibbs distribution. That is, $\vec{s}(t)$ has a probability ${P}_{\mathrm{swap}}{{\rm{e}}}^{-\beta {E}_{0}}/Z(t)$ of being interchanged with $(1,0)$, a probability ${P}_{\mathrm{swap}}{{\rm{e}}}^{-\beta {E}_{1}(t)}/Z(t)$ of being interchanged with $(0,1)$, and a probability $1-{P}_{\mathrm{swap}}$ of remaining unchanged: $\vec{s}(t+{\rm{d}}t)=\vec{s}(t)$. The longer ${\mathcal{S}}$ couples to the reservoir, the greater the ${P}_{\mathrm{swap}}$. The ignorant agent represents this thermalization with $\vec{s}(t+{\rm{d}}t)=M(t;{P}_{\mathrm{swap}})\vec{s}(t)$, wherein $M(t;{P}_{\mathrm{swap}})$ is a thermalizing matrix that obeys detailed balance (see the proof of lemma 12). Because $\vec{s}(t+{\rm{d}}t)$ depends on no earlier state except $\vec{s}(t)$, the evolution is Markovian.

Ideally, the agent increases ${E}_{1}(t)$ and thermalizes ${\mathcal{S}}$ repeatedly until $t=\tau $, ${E}_{1}(\tau )=\infty $, and $\vec{s}(\tau )=(1,0)$ according to both observers. The simulated ${E}_{1}(t)$ peaks at some large ${E}_{\mathrm{max}}$, and the final state has a high probability of being $(1,0)$ [29]. During stage two of erasure, ${\mathcal{S}}$ is thermally isolated, and ${E}_{1}$ decreases to zero. Because $\vec{s}$ has no weight on the upper level, this stage costs no work.

Reversing erasure amounts to extracting work. Initially, ${E}_{1}={E}_{0}=0$, and $\vec{s}=(1,0)$. As ${\mathcal{S}}$ remains thermally isolated, ${E}_{1}$ rises to infinity (approximated by ${E}_{\mathrm{max}}$) without consuming work. During stage two of work extraction, the agent repeatedly lowers ${E}_{1}(t)$ by ${\rm{d}}E$ and thermalizes ${\mathcal{S}}$. Whenever the agent lowers ${E}_{1}(t)$ while $\vec{s}(t)=(0,1)$ according to the omniscient observer, ${\mathcal{S}}$ outputs work ${\rm{d}}E$. Once $t=-\tau $ and ${E}_{1}(-\tau )=0$, ${\mathcal{S}}$ thermalizes until the probability that $\vec{s}(t)=(1,0)$ equals the probability that $\vec{s}(t)=(0,1)$.

To produce figure 3, we simulated a bit-reset process in which the energy gap between the two levels increases linearly from 0 to $40\;{k}_{{\rm{B}}}T$ across 100 000 equal time steps. After each step, the probability of thermalizing via a partial swap is 0.002, and $\beta =10\;{k}_{{\rm{B}}}^{-1}\;{{\rm{K}}}^{-1}$. This process was repeated 10 000 times, and the resulting work values were binned into a histogram with 50 divisions. The maximum of the bit-reset work distribution was ${P}^{\mathrm{max}}=8.60$, satisfying ${P}^{\mathrm{max}}\lt \beta $.
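A compact reimplementation of this simulation, with omniscient-observer bookkeeping and the parameters quoted above, might look as follows (this is a sketch, not the code used to produce figure 3; reduce n_trials for a quicker run):

```python
import numpy as np

rng = np.random.default_rng(1)
kT = 1.0                                     # work measured in units of k_B T
n_steps, n_trials, p_swap = 100_000, 10_000, 0.002
E1 = np.linspace(0.0, 40.0 * kT, n_steps + 1)    # gap raised linearly to 40 k_B T
dE = np.diff(E1)

excited = rng.random(n_trials) < 0.5         # each trial starts in either level, probability 1/2
work = np.zeros(n_trials)
for step in range(n_steps):
    work += dE[step] * excited               # raising the occupied upper level costs dE
    swap = rng.random(n_trials) < p_swap     # which trials undergo a partial swap
    p_exc = np.exp(-E1[step + 1] / kT) / (1.0 + np.exp(-E1[step + 1] / kT))
    excited = np.where(swap, rng.random(n_trials) < p_exc, excited)

hist, edges = np.histogram(work, bins=50, density=True)
print(work.mean())                           # of order kT ln 2 ~ 0.69 for a near-quasistatic reset
```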

Footnotes

  • W can be continuous even if the system has a discrete energy spectrum: consider a system that interacts with the heat bath while two (or more) energy levels shift at different rates. The system can jump from one level to another at any time, due to thermalization. The work cost of a trajectory that jumps at time t can differ infinitesimally from the work cost of a trajectory that jumps at time $t+{\rm{d}}t$.

  • For the standard formulation $H(X)=-\int {\rm{d}}X\,P(X)\,\mathrm{log}\,P(X)$, dimensions have been ignored, and the implicit reference is a uniform distribution with width 1 and probability density 1. The unnormalized expression is ${\mathrm{lim}}_{{\rm{d}}X\to 0}\left[-{\displaystyle \sum }_{X}{\rm{d}}X\,P(X)\,\mathrm{log}[P(X)\,{\rm{d}}X]\right]$, which diverges due to the ${\rm{d}}X$ in the logarithm.

  • One-shot entropies of continuous probability density functions are known to assume negative values [40] when they are less entropic than the implicit reference distribution.

  • By non-negative, we mean that every element of M is no less than zero.

  • Though quasicycles look cyclic, they are aperiodic. For the purposes of the Perron–Frobenius Theorem, a matrix's period is the maximum value ${k}_{\mathrm{max}}$ of k that satisfies the statement 'a system prepared in level A has a nonzero probability of evolving to level A only (but not necessarily) after multiples of k steps'. For irreducible matrices, the possible values of k do not depend on the choice of A [52]. When this index ${k}_{\mathrm{max}}=1$, the matrix is aperiodic. Every quasicycle contains at least one node that transitions to itself (not all ${p}_{i}=1$, so at least one loop satisfies $P(i\mapsto i)=(1-{p}_{i})\gt 0$). A system at that node can return to it after any number of steps, so only k = 1 satisfies the above statement. Hence ${k}_{\mathrm{max}}=1$, and quasicycles are aperiodic.

  • For strictly positive matrices, the earlier Perron theorem implies the same result.
