Optimal networks for quantum metrology: semidefinite programs and product rules

We investigate the optimal estimation of a quantum process that can possibly consist of multiple time steps. The estimation is implemented by a quantum network that interacts with the process by sending an input and processing the output at each time step. We formulate the search for the optimal network as a semidefinite program and use duality theory to give an alternative expression for the maximum payoff achieved by estimation. Combining this formulation with a technique devised by Mittal and Szegedy we prove a general product rule for the joint estimation of independent processes, stating that the optimal joint estimation can be achieved by estimating each process independently, whenever the figure of merit is of a product form. We illustrate the result in several examples and exhibit counterexamples showing that the optimal joint network may not be the product of the optimal individual networks if the processes are not independent or if the figure of merit is not of the product form. In particular, we show that entanglement can reduce the variance in the estimation of the sum of K independent phase shifts by a factor of K.


Introduction
Quantum theory offers impressive advantages over classical theory in the estimation of physical parameters [1,2,3,4,5,6,7,10,11,12,13]. The prototypical example is the estimation of an unknown phase shift [3,4,11,12]: here the variance vanishes as N^{-2} with the number N of accesses to the phase-shifting process, whereas classical statistics over independent copies would give the scaling N^{-1}. The quadratic improvement is achieved by preparing an entangled state of N systems and applying the unknown process to each system. The same quadratic advantage is found in the estimation of a direction in space [5,6] and in the joint estimation of three Cartesian axes [7,8,9].
Given the usefulness of entanglement in the estimation of a single parameter from multiple accesses to a physical process, it is natural to ask whether entanglement can improve the estimation of many parameters corresponding to different processes. For example, one may wonder whether entanglement can help in the estimation of two independent phase shifts. In a slightly different context, this type of question was originally addressed by Wootters in an unpublished work and by DiVincenzo, Terhal, and Leung [14], who asked whether a joint entangled measurement could improve the extraction of information about two bits encoded in two independent sets of states. In this scenario, it was shown that the amount of information that can be extracted from the product set is additive [14]. More recently, a different proof showing the optimality of product measurements for the extraction of information from general product sets of states was provided in Ref. [15].
In this paper we address the problem of the joint estimation of the parameters encoded in a set of independent processes, where each process can consist of several time steps. Due to the possibility of connecting an input of an unknown process with the output of another one, the question whether quantum correlations can improve the estimation is not only a question about the usefulness of entanglement in the input states and in the measurements, but also about the usefulness of quantum correlations in time, namely correlations mediated by the exchange of quantum systems from one time step to the next. We address the question in the framework of quantum estimation [16,17], where the figure of merit is the expected payoff associated to a payoff function g(x̂, x), which depends on the true value x and on the estimated value x̂ labelling the unknown process. In order to tackle the question we formulate the optimization of the quantum network for the estimation of an unknown multi-time process as a semidefinite program and we discuss the corresponding dual problem. In this context we prove a general product rule, showing that the optimal joint estimation of a set of independent parameters x := (x_1, ..., x_K) can be achieved by estimating each parameter independently whenever the figure of merit is of the product form g(x̂, x) = ∏_{k=1}^K g_k(x̂_k, x_k), where g_k is the payoff function for the parameter x_k. In particular, our result implies that the maximum probability of success in identifying a set of unknown processes is the product of the maximum probabilities of success in identifying each individual process separately.
Product theorems are a key tool in theoretical computer science [18,19,20,21,22,23,24], where one is often interested in how the resources needed to solve several independent problems jointly are related to the resources needed to solve each problem individually. Our work begins to explore the usefulness of these techniques in the domain of physics, starting from the fundamental problem of identifying a set of independent physical parameters. In order to prove our result we use the framework of quantum combs [25,26] (see also the work by Gutoski and Watrous on quantum strategies [27]). As already mentioned, in this framework we formulate the maximization of the expected payoff as a semidefinite program, and present an intuitive formulation of the dual minimization program. Such a dual formulation is interesting in its own right, as it generalizes to arbitrary processes and arbitrary payoff functions a classic formula derived by Yuen, Kennedy, and Lax [28] for minimum error state discrimination. Exploiting the form of the primal and dual programs, we then prove our product theorem following a general technique devised by Mittal and Szegedy in Ref. [23] (see also Ref. [24]), adapted here to deal with the optimization of quantum networks consisting of multiple time steps.

Quantum networks for process estimation
Suppose that an experimenter has access to a physical process P_x that depends on an unknown parameter x in some parameter space X. The goal of the experimenter is to determine the parameter x with the maximum precision allowed by the laws of quantum mechanics.
Generally, the process P_x can consist of N time steps, labelled by an index s in some finite set S = {s_1, ..., s_N} ⊂ ℕ, ordered so that s_m < s_n for m < n. At each time step s ∈ S the process transforms an input quantum system, with Hilbert space denoted by H^{(s)}_in, into a (possibly different) output quantum system, with Hilbert space denoted by H^{(s)}_out. If the process P_x is memoryless, all time steps are independent and one can associate a quantum channel to each time step. The quantum channel at step s, denoted by C^{(s)}_x, will be a completely positive trace-preserving map sending density matrices on H^{(s)}_in to density matrices on H^{(s)}_out. Hence, the process P_x can be described by a time-ordered sequence of quantum channels, each channel labelled by the unknown parameter x, as in the following picture. In the easiest case, one may have the same channel at each time step, namely C^{(s)}_x = C_x for every s ∈ S. This is the case, e.g., of quantum phase estimation [3,4,10,11,12,13], where one has access to N uses of the unitary channel C_x(ρ) = U_x ρ U_x†, with U_x = exp(ixH) for some Hamiltonian H with integer spectrum.
In the presence of memory, the input-output transformation at step s_n is described by a quantum channel involving internal ancillas: in this case the quantum channel C^{(s_n)}_x transforms density matrices on H^{(s_n)}_in ⊗ A_{n−1} into density matrices on H^{(s_n)}_out ⊗ A_n, where A_n is the Hilbert space of the n-th ancilla. Hence, the process P_x is represented by a time-ordered sequence of black boxes with internal memories. Note that, since the ancillas are internal to the network, the first and last ancillary systems are trivial: A_0 ≃ A_N ≃ ℂ.
The most general strategy to estimate an unknown parameter from a time-ordered sequence of black boxes consists in inserting them in a quantum network where they are interspersed with known quantum gates and eventually a quantum measurement is performed on the output, producing the estimate x ∈ X.
The estimation process can be depicted as in Eq. (1), where B_s, s ∈ S, are the internal ancillas of the estimating network, Ψ is a quantum state on the first input space, each U_s is a quantum channel, and P_x̂ is a quantum measurement, described by a positive operator valued measure (POVM) on the final output space. Examples of quantum networks for the estimation of unknown parameters can be found in Refs. [11,12].

Optimizing quantum networks: the method of quantum combs
A convenient way to optimize quantum networks is the method of quantum combs [25,26] (see also the work on quantum strategies by Gutoski and Watrous [27]), which associates positive operators to sequential quantum networks. Here we briefly summarize some basic facts about this method, referring the reader to the original papers for the proofs and for further details.
We will use the following notation: Lin(H) denotes the set of linear operators on a (finite-dimensional) Hilbert space H, Lin_+(H) denotes the set of positive operators on H, while St(H) denotes the set of density matrices on H, that is, the set of positive operators ρ ∈ Lin_+(H) such that Tr[ρ] = 1.

Quantum combs.
A sequential network of quantum channels with internal memories can be associated with a positive operator satisfying suitable linear constraints. Precisely, a network of the form of Eq. (2) is associated with a positive operator R ∈ Lin_+(⊗_{s∈S} (H^{(s)}_out ⊗ H^{(s)}_in)). The fact that the network consists of quantum channels (trace-preserving maps) imposes the following constraint: there must exist a set of positive operators R^{(n)}, n = 1, ..., N − 1, such that, setting R^{(N)} := R and R^{(0)} := 1,

Tr_{out,s_n}[R^{(n)}] = I_{in,s_n} ⊗ R^{(n−1)},   n = 1, ..., N,   (3)
where Tr_{out,s_n} and I_{in,s_n} denote the partial trace over H^{(s_n)}_out and the identity operator on H^{(s_n)}_in, respectively [27,25,26].
Most importantly, the converse also holds [27,25,26]: if a positive operator R satisfies the constraints of Eq. (3) for some set of positive operators R^{(n)}, n = 1, ..., N − 1, then there exists a network of the form of Eq. (2) such that the operator associated to that network is R. This is important because it implies that optimizing over quantum networks is completely equivalent to optimizing over positive operators R satisfying Eq. (3). In fact, given an operator R satisfying the constraints, there is a constructive algorithm to build up the channels C^{(s)} at all time steps s ∈ S [29]. In the following, a positive operator satisfying Eq. (3) for some operators R^{(n)}, n = 1, ..., N − 1, will be called a quantum comb. We will denote the set of quantum combs with a prescribed number of time steps and prescribed input and output Hilbert spaces by Comb.

More generally, a quantum network can contain measurements: at each time step s one can have a measurement with outcome m_s in some set M_s. Conditionally on the outcome m_s, the input system undergoes a transformation, represented by a completely positive trace non-increasing map C^{(s)}_{m_s}, with the condition that the sum over all outcomes C^{(s)} := Σ_{m_s ∈ M_s} C^{(s)}_{m_s} is trace-preserving. A sequential network containing measurements, such as the network of Eq. (4), can be associated with a collection of positive operators {T_m | m ∈ M} with the property that the sum over all outcomes T := Σ_{m∈M} T_m satisfies Eq. (3). We call such a collection of operators a quantum tester. It is possible to prove that, if a collection of positive operators T = {T_m | m ∈ M} is a quantum tester, then there exists a quantum network of the form of Eq. (4) such that T is the tester associated to that network [27,25,26]. Note that here the measurement takes place only in the last step, while the boxes C^{(s_n)}, n = 1, ..., N − 1, represent quantum channels.
A particular type of testers are those where the first and last quantum systems are trivial [H_out ≃ ℂ in Eq. (4)]. These testers represent quantum networks that start with a state preparation and end with a POVM measurement. These are exactly the networks that are interesting for the estimation of quantum processes, as depicted in Eq. (1): note that to test a process consisting of N time steps we need a tester consisting of N + 1 time steps. Labelling the Hilbert spaces as in the diagram of Eq. (5), the normalization of the tester T takes the recursive form of Eq. (6), for some set of positive operators Ξ^{(n)}.
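As a concrete check of the normalization constraint, the N = 1 instance of Eq. (3) says that the Choi operator of a trace-preserving channel must satisfy Tr_out[R] = I_in. Below is a minimal numpy sketch; the amplitude-damping channel and the ordering H_out ⊗ H_in of the tensor factors are illustrative choices, not fixed by the text.

```python
import numpy as np

def choi(kraus_ops, d_in):
    """Choi operator J = sum_{ij} E(|i><j|) (x) |i><j|, with ordering H_out (x) H_in."""
    d_out = kraus_ops[0].shape[0]
    J = np.zeros((d_out * d_in, d_out * d_in), dtype=complex)
    for i in range(d_in):
        for j in range(d_in):
            Eij = np.zeros((d_in, d_in), dtype=complex)
            Eij[i, j] = 1.0
            J += np.kron(sum(K @ Eij @ K.conj().T for K in kraus_ops), Eij)
    return J

def trace_out(J, d_out, d_in):
    """Partial trace over the output factor of an operator on H_out (x) H_in."""
    return np.einsum('aiaj->ij', J.reshape(d_out, d_in, d_out, d_in))

# amplitude-damping channel: an arbitrary example of a trace-preserving map
g = 0.3
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - g)]])
K1 = np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])

J = choi([K0, K1], d_in=2)
constraint = trace_out(J, d_out=2, d_in=2)   # should equal the identity on H_in
```

For N > 1 the same check applies recursively, with the partial trace over the last output producing the identity on the last input tensored with a comb of one step less.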

If we test a process represented by the quantum comb
with a network represented by the tester T := {T_m | m ∈ M}, then we obtain a probability distribution p(m|R) over all possible outcomes. Such a probability distribution is given by the generalized Born rule of Refs. [25,26]:

p(m|R) = Tr[T_m R].   (7)

Here the quantum comb R plays the role of the density matrix in the ordinary Born rule, and the tester {T_m | m ∈ M} plays the role of the POVM measurement. In fact, the ordinary Born rule can be retrieved as a special case of Eq. (7), corresponding to the case of state preparation processes, namely processes that consist of a single time step (N = 1) with no input system (H_in ≃ ℂ). In that special case, the normalization of the quantum comb, given by Tr_{out,s_1}[R] = I_{in,s_1}, becomes Tr[R] = 1, which is the normalization of a density matrix, while the normalization of the tester, given by Σ_{m∈M} T_m = I_{out,s_1} ⊗ Ξ^{(1)} with Tr[Ξ^{(1)}] = 1, becomes Σ_{m∈M} T_m = I_{out,s_1}, which is the normalization of a POVM.
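The reduction of the generalized Born rule to the ordinary one can be checked directly in the single-time-step case, where the comb is a density matrix and the tester is a POVM. A small sketch with an arbitrary example state and the |±⟩ measurement:

```python
import numpy as np

# N = 1 preparation process: the comb is an ordinary density matrix rho,
# and the tester is an ordinary POVM {T_m}; the rule reduces to p(m) = Tr[T_m rho].
rho = np.array([[0.75, 0.25], [0.25, 0.25]])              # example qubit state
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
tester = [np.outer(plus, plus), np.outer(minus, minus)]   # POVM summing to I

probs = [np.trace(T @ rho).real for T in tester]          # generalized Born rule
```

The probabilities are automatically normalized because the tester elements sum to the identity.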

The optimization problem of Quantum Metrology
In process estimation one has a parametric family of processes with a given input-output structure and with a fixed number N of time steps, labelled by an index s ∈ S ⊂ ℕ. Each process is described by a quantum comb R_x, where x ∈ X is the parameter to be estimated. Let us denote by π(x) the probability that the unknown parameter has the value x. If x has a continuum of values, π(x) will represent the probability density of x with respect to some measure dx. For simplicity, in the following we will present the results in the discrete case, but it is important to bear in mind that these results also hold in the continuous case, upon replacing sums with integrals and replacing the quantifier "∀x ∈ X" with "∀x ∈ X except at most for a set of zero measure".

Primal maximization problem
For an estimation strategy described by the quantum tester T := {T_x̂ | x̂ ∈ X}, the probability distribution p(x̂|x) is given by Eq. (7). In order to evaluate the performance of a given strategy, we introduce a payoff function g(x̂, x), which quantifies the gain [or the loss, when the value of g(x̂, x) is negative] obtained by estimating x̂ when the actual value is x. In the following we will require that the payoff function is non-negative, that is, g(x̂, x) ≥ 0 for all x̂, x ∈ X. Clearly, this assumption can be made without loss of generality as long as the payoff is bounded from below (that is, as long as there is a limit to the losses). The expected payoff, averaged over the possible true values, is then given by

γ = Σ_{x,x̂ ∈ X} π(x) p(x̂|x) g(x̂, x) = Σ_{x̂ ∈ X} Tr[T_x̂ G_x̂],   G_x̂ := Σ_{x ∈ X} π(x) g(x̂, x) R_x.

An example of payoff function is g(x̂, x) = δ_{x̂,x}, which gives a unit gain if and only if the estimated value x̂ coincides with the true value x. In this case the average gain coincides with the average probability of guessing the correct value. A tester T is optimal if it achieves the maximum payoff γ_max, defined by the maximization of the expected payoff over all quantum testers [Eq. (10)].

Dual minimization problem
Maximizing the payoff in Eq. (10) is a semidefinite program. Using duality theory we now give a useful expression for the maximum payoff:

Theorem 1 The maximum payoff is given by

γ_max = min { λ ≥ 0 | λ Γ ≥ G_x̂ ∀x̂ ∈ X, Γ ∈ Comb },   (11)

where G_x̂ is defined as in Eq. (9).
The proof of the theorem, given in the Appendix, follows the same lines used by Gutoski [35] to prove strong duality for the minimum error discrimination of two quantum processes, which is the special instance of our problem corresponding to X := {0, 1} and g(x̂, x) = δ_{x̂,x}. Here we illustrate the result of Theorem 1 in a few special examples.

Examples
4.3.1. State estimation. State estimation can be viewed as a special case where the unknown process P_x to be estimated consists only in the preparation of a quantum state ρ_x ∈ Lin_+(H) (that is, there is only one time step N = 1, the output Hilbert space is H^{(s_1)}_out = H, and the input Hilbert space is trivial, H^{(s_1)}_in ≃ ℂ). In this case, the expression (11) becomes

γ_max = min { λ ≥ 0 | λρ ≥ G_x̂ ∀x̂ ∈ X, ρ ∈ St(H) },   (12)

with G_x̂ := Σ_{x ∈ X} π(x) g(x̂, x) ρ_x.

4.3.2. Minimum error state discrimination. If g(x̂, x) = δ_{x̂,x}, the maximum payoff γ_max coincides with the maximum probability p_max^succ of guessing the correct value, so that maximizing the payoff is equivalent to minimizing the error probability. In this special case we retrieve from Eq. (11) the classic expression by Yuen, Kennedy, and Lax [28] (see also [30,31]):

p_max^succ = min { Tr[Λ] | Λ ≥ π(x) ρ_x ∀x ∈ X }

[the above expression follows from Eq. (12) with the definition Λ := λρ].
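The Yuen–Kennedy–Lax expression can be verified numerically in the binary case, where the primal optimum is the Helstrom value (1 + ‖π_0ρ_0 − π_1ρ_1‖_1)/2 and a dual-feasible Λ attaining it is π_1ρ_1 plus the positive part of π_0ρ_0 − π_1ρ_1. A sketch with illustrative states and priors:

```python
import numpy as np

def dm(v):
    """Density matrix of a pure state given as an (unnormalized) vector."""
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

# binary ensemble (illustrative states and priors)
p0, p1 = 0.6, 0.4
rho0, rho1 = dm([1, 0]), dm([1, 1])

Delta = p0 * rho0 - p1 * rho1
w, V = np.linalg.eigh(Delta)
helstrom = 0.5 * (1 + np.abs(w).sum())        # optimal success probability

# dual-feasible operator: Lambda = p1*rho1 + positive part of Delta
Lam = p1 * rho1 + (V * np.clip(w, 0, None)) @ V.conj().T
duality_gap = np.trace(Lam).real - helstrom   # vanishes at the optimum
```

The vanishing gap illustrates the strong duality underlying Theorem 1 in this special case.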

4.3.3.
State estimation/discrimination in the group covariant case. The dual expression for the maximum payoff has an interesting interpretation in the presence of symmetry. Let us first consider a simple case of state discrimination, where X is a finite group, the prior probability π is uniform, that is, π(x) = 1/|X|, and the unknown state is given by ρ_x = U_x ρ_0 U_x†, where ρ_0 ∈ St(H) is a fixed state and U : X → Lin(H), x ↦ U_x, is a projective unitary representation of the group X. In this case, it is easy to show that the minimization over Λ = λρ in Eq. (12) can be restricted without loss of generality to invariant states, satisfying U_x ρ U_x† = ρ for all x ∈ X. Hence, we have

p_max^succ = 1 / (|X| q_max),   q_max := max { q | q ρ_0 ≤ ρ, ρ invariant }.   (13)

By definition, q_max is the maximum probability that ρ_0 can have in an ensemble decomposition of an invariant state ρ, optimized over all possible invariant states. The probability q_max ranges between 1/|X| and 1. Intuitively, q_max can be interpreted as a measure of how symmetric the state ρ_0 is: for q_max = 1 the state ρ_0 is itself invariant, while for q_max = 1/|X| the state ρ_0 generates a family of orthogonal states ρ_x = U_x ρ_0 U_x†. The result can be easily extended to arbitrary payoff functions that are left-invariant under the action of the group, that is, functions g satisfying the condition g(yx̂, yx) = g(x̂, x) for all x̂, x, y ∈ X. Moreover, the expression of Eq. (13) can be generalized to a form that holds also for continuous groups:

Corollary 1 Let X be a compact group, g : X × X → ℝ be a left-invariant payoff function, and ρ_x be the quantum state ρ_x := U_x ρ_0 U_x†, where U : x ↦ U_x is a unitary representation of the group X. If the prior probability is given by the Haar measure dx, then the maximum average payoff over all quantum measurements is given by

γ_max = γ_0 / q_max,   γ_0 := Tr[G_e],   (14)

where e ∈ X denotes the identity element of the group X and q_max is the maximum probability of the state σ_0 := G_e/γ_0 in an ensemble decomposition of an invariant state.
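A minimal sketch of q_max for the smallest nontrivial example, assuming the covariant reduction p_max^succ = 1/(|X| q_max): take X = {I, Z} acting on a qubit with ρ_0 = |+⟩⟨+|. The Z-invariant states are exactly the diagonal ones, and ρ_1 = Z ρ_0 Z = |−⟩⟨−| is orthogonal to ρ_0, so one expects q_max = 1/|X| = 1/2 and unit success probability:

```python
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(plus, plus)                 # fixed state |+><+|

def largest_q(a):
    """Largest q with q*rho0 <= diag(a, 1-a), found by bisection on the PSD condition."""
    rho = np.diag([a, 1.0 - a])             # candidate invariant (diagonal) state
    lo, hi = 0.0, 1.0
    for _ in range(60):
        q = 0.5 * (lo + hi)
        if np.linalg.eigvalsh(rho - q * rho0)[0] >= 0:
            lo = q
        else:
            hi = q
    return lo

# optimize over invariant states and apply the covariant formula with |X| = 2
q_max = max(largest_q(a) for a in np.linspace(0.01, 0.99, 99))
p_succ = 1.0 / (2 * q_max)
```

The scan over diagonal states stands in for the optimization over invariant ρ; in higher dimensions this step would itself be a small semidefinite program.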
Proof. Using the invariance of the Haar measure and of the payoff function, it is easy to check that G_x̂ = U_x̂ (γ_0 σ_0) U_x̂†, with γ_0 := Tr[G_e] and σ_0 := G_e/γ_0. Using this fact, we can restrict the minimization in Eq. (11) to invariant states ρ satisfying the condition λρ ≥ γ_0 σ_0. Finally, defining q := γ_0/λ, we can transform the minimization over λ into a maximization over q, thus proving the thesis.

Binary discrimination of multi-time quantum processes
The discrimination of two multi-time processes P_0 and P_1 corresponds to the special case where X = {0, 1}. In this case, the maximum probability of successful discrimination defines an operational norm on the real vector space generated by quantum processes [34,35]. For prior probabilities π_0 and π_1, the probability of success and the norm are linked by the relation [34]

p_max^succ = (1 + ‖π_0 P_0 − π_1 P_1‖)/2,

which generalizes the well-known expression by Helstrom [16] for the optimal discrimination between two quantum states. In the binary case the dual expression for the maximum success probability given by Theorem 1 coincides with the dual expression presented by Gutoski in Ref. [35].
4.3.5. Process estimation/discrimination in the group covariant case. Consider the case of a general process P_x consisting of N time steps. Suppose that P_x is obtained from a fixed process P_0 through the action of the group X, with a unitary quantum channel V^{(s)}_x (U^{(s)}_x) representing the action of the group on the input (output) system at the s-th time step.
Denoting by R_x and R_0 the quantum combs corresponding to the processes P_x and P_0, it is possible to show that R_x is obtained from R_0 by conjugation with the unitaries ⊗_{s∈S} (U^{(s)}_x ⊗ V̄^{(s)}_x), where the bar denotes complex conjugation with respect to the computational basis [32]. The result of Corollary 1 can then be generalized immediately to the case of general processes:

Corollary 2 Let X be a compact group, g : X × X → ℝ be a left-invariant payoff function, and let R_x be the quantum comb of the covariant process P_x defined above. If the prior probability is given by the Haar measure dx, then the maximum average payoff over all quantum testers is given by the analogue of Eq. (14), where e ∈ X denotes the identity element of the group X.
Proof.Same proof as for corollary 1.

Product rule for the estimation of independent processes
Imagine that we have K processes, where each process P_{k,x_k} corresponds to a quantum network as in figure 2 and is labelled by an unknown parameter x_k in some set X_k, k = 1, ..., K. For every fixed k, all the processes {P_{k,x_k} | x_k ∈ X_k} consist of the same number N_k of time steps, which we label by an index s_k in some set S_k ⊂ ℕ. At time s_k, each process P_{k,x_k} transforms an input system with Hilbert space H^{(s_k)}_{k,in} into an output system with Hilbert space H^{(s_k)}_{k,out}. Let us denote by x the vector of parameters x := (x_1, ..., x_K) ∈ X := X_1 × ··· × X_K. We say that the K processes {P_{k,x_k} | k = 1, ..., K} are independent when
• two processes P_{k,x_k} and P_{l,x_l} with k ≠ l correspond to two disconnected quantum networks for every x_k ∈ X_k and every x_l ∈ X_l, and
• the prior distribution of the parameters factorizes as π(x) = ∏_{k=1}^K π_k(x_k), where π_k is the prior distribution for the parameter x_k.
For example, the different parameters could be K independent and uniformly distributed phase shifts.
Suppose that we want to estimate the parameter x labelling the joint process P_x and that our figure of merit is given by the payoff function g(x̂, x). If we are interested in each parameter independently, then the payoff function for the estimation of the vector x is the product of the payoff functions for the estimation of its components:

g(x̂, x) = ∏_{k=1}^K g_k(x̂_k, x_k),   g_k ≥ 0 ∀k,   (15)

where the notation g_k ≥ 0 means g_k(x̂_k, x_k) ≥ 0 for all x̂_k, x_k ∈ X_k. For example, the payoff function could give a unit reward only when all the parameters are guessed correctly, so that g(x̂, x) = δ_{x̂,x} = ∏_{k=1}^K δ_{x̂_k,x_k}. Note that, in order to have a meaningful figure of merit for the estimation of the vector x, it is important to have g_k ≥ 0 for every k: otherwise, the product of two negative gains (i.e. of two losses) for two different parameters would count as a positive gain for the joint estimation of the vector x.
Based on the hypotheses of independence of the processes and of the product form of the payoff function we can prove the following theorem:

Theorem 2 (Product rule for the estimation of K independent processes) Let P_{k,x_k}, k = 1, ..., K, be K independent processes, each labelled by an unknown parameter x_k ∈ X_k with prior probability π_k(x_k). Then, for a payoff function g(x̂, x) of the product form of Eq. (15), the maximum payoff for the estimation of x is given by the product of the maximum payoffs for the estimation of its components:

γ_max = ∏_{k=1}^K γ_max,k,

where γ_max,k is the maximum payoff achievable in the estimation of x_k.
In other words, the optimal estimation of the vector x can be achieved by estimating each component x k independently.
Proof. Clearly, we have γ_max ≥ ∏_{k=1}^K γ_max,k, because restricting to product strategies can only reduce the maximum payoff. To prove the converse we use the dual minimization problem of Theorem 1, in which restricting to product combs can only increase the minimum.
Let R_{k,x_k} be the quantum comb representing the process P_{k,x_k} and let R_x = ⊗_{k=1}^K R_{k,x_k} be the quantum comb representing the process P_x = ⊗_{k=1}^K P_{k,x_k}. Let us denote by C_k and C the sets of quantum combs for the k-th process and for the joint process, respectively, so that R_{k,x_k} ∈ C_k and R_x ∈ C. Define the positive operators G_{k,x̂_k} := Σ_{x_k} π_k(x_k) g_k(x̂_k, x_k) R_{k,x_k} and G_x̂ := ⊗_{k=1}^K G_{k,x̂_k}. Then, by Theorem 1, the tensor product of dual-feasible operators for the individual problems is dual feasible for the joint problem, whence γ_max ≤ ∏_{k=1}^K γ_max,k.

5.0.6. Relation with the product rules by Mittal and Szegedy. The technique used to prove that the optimal payoff is of the product form is directly inspired by a result by Mittal and Szegedy on product rules for semidefinite programming [23]. However, our result is not a direct application of the theorem in Ref. [23], which concerns product programs, where the linear constraint of the product program is the tensor product of the linear constraints of the individual programs. The theorem is not directly applicable in our case because in the joint estimation of K processes the linear constraints of Eq. (10) are not the tensor product of the linear constraints for the estimation of each process separately. However, the crucial point is that the tensor product of K operators satisfying the constraints individually is an operator that satisfies the joint constraint, and that this property holds both in the primal maximization problem and in the dual minimization program.
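The crucial feasibility property is elementary to check numerically: if S_1 ≥ G_1 ≥ 0 and S_2 ≥ G_2 ≥ 0, then S_1 ⊗ S_2 ≥ G_1 ⊗ G_2, since S_1⊗S_2 − G_1⊗G_2 = (S_1−G_1)⊗S_2 + G_1⊗(S_2−G_2) is a sum of positive operators. A sketch with random positive matrices standing in for the gain operators:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_psd(d):
    """Random positive semidefinite matrix X X^dag."""
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return X @ X.conj().T

def min_eig(M):
    return np.linalg.eigvalsh(M)[0]

# G1, G2 play the role of the (positive) gain operators of two separate programs;
# S1 >= G1 and S2 >= G2 are feasible points of the two dual programs.
G1, G2 = rand_psd(3), rand_psd(3)
S1, S2 = G1 + rand_psd(3), G2 + rand_psd(3)

# the tensor product S1 (x) S2 is then feasible for the joint dual program:
joint_slack = min_eig(np.kron(S1, S2) - np.kron(G1, G2))
```

The non-negative slack confirms that products of individually feasible dual points remain jointly feasible, which is the step that makes the Mittal–Szegedy technique applicable here.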
5.0.7. Example 5: minimum error discrimination of K sets of processes. Theorem 2 can be applied to the case of minimum error discrimination of processes. Suppose that for every k = 1, ..., K we have a set of processes {P_{k,x_k} | x_k ∈ X_k}. Denoting by p_max^succ,k the maximum probability of success in correctly identifying the k-th process, and by p_max^succ the maximum probability of success in correctly identifying all processes, we then have p_max^succ = p_max^succ,1 ··· p_max^succ,K. The best joint strategy for discrimination is just the product of the best individual strategies.
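The achievability direction of this statement can be illustrated numerically: applying the optimal Helstrom measurement to each subsystem, the joint success probability of the product strategy factorizes exactly into the product of the individual Helstrom values. The states and priors below are illustrative choices:

```python
import numpy as np

def dm(v):
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

def helstrom_measurement(p0, rho0, p1, rho1):
    """Optimal binary measurement (projector onto the positive part) and its success probability."""
    w, V = np.linalg.eigh(p0 * rho0 - p1 * rho1)
    P0 = (V * (w > 0)) @ V.conj().T
    P1 = np.eye(len(w)) - P0
    return [P0, P1], (p0 * np.trace(P0 @ rho0) + p1 * np.trace(P1 @ rho1)).real

# two independent binary discrimination problems (illustrative ensembles)
states_a, priors_a = [dm([1, 0]), dm([1, 1])], [0.5, 0.5]
states_b, priors_b = [dm([1, 0]), dm([1, 1])], [0.7, 0.3]
Pa, pa = helstrom_measurement(priors_a[0], states_a[0], priors_a[1], states_a[1])
Pb, pb = helstrom_measurement(priors_b[0], states_b[0], priors_b[1], states_b[1])

# success probability of the product strategy on the joint (tensor product) ensemble
p_joint = sum(priors_a[x] * priors_b[y]
              * np.trace(np.kron(Pa[x], Pb[y]) @ np.kron(states_a[x], states_b[y])).real
              for x in range(2) for y in range(2))
```

That no joint (entangled) measurement can beat this product value is the nontrivial content of Theorem 2, which the code does not attempt to verify.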

Counterexamples
Our Theorem 2 proved the optimality of product strategies under the hypotheses that the processes are independent and that the payoff function is of a product form. Here we show that if either of these hypotheses is dropped, there are examples where the result does not hold.
5.1.1. Minimum error discrimination of two pure states with multiple copies. One of the most basic problems in quantum information is to distinguish between two nonorthogonal quantum states (see e.g. the classic textbook by Helstrom [16]). In this context, one important question is how small the probability of error can be made when a finite number of identically prepared quantum systems is available. Consider the minimum error discrimination of two pure states {ρ_0, ρ_1} with prior probabilities {p_0, p_1}, in the case where K identical copies of the unknown state are available. We can view this problem as an instance of minimum error discrimination of K perfectly correlated preparation processes, each of which prepares one of the states {ρ_0, ρ_1}. Denoting by p_max^succ(K) the probability of success with K copies, we know from the quantum Chernoff bound [33] that p_max^succ(K) converges to 1 exponentially fast in the limit K → ∞. On the other hand, the product of the single-copy probabilities of success, given by [p_max^succ(1)]^K, tends to zero (exponentially fast) unless the two states are perfectly distinguishable.
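The gap between the joint and the product figures can be made explicit for two pure states with equal priors, where the K-copy Helstrom probability is (1 + √(1 − 4p_0p_1 F^K))/2, with F the squared overlap of the two states. A short numerical sketch (the value of F is an arbitrary choice):

```python
import numpy as np

p0 = p1 = 0.5
fidelity = np.cos(np.pi / 8) ** 2        # |<psi0|psi1>|^2 for a fixed pair of pure states

def p_succ(K):
    """Helstrom success probability for K copies: the product states have overlap fidelity**K."""
    return 0.5 * (1 + np.sqrt(1 - 4 * p0 * p1 * fidelity ** K))

joint = np.array([p_succ(K) for K in range(1, 51)])
product_of_singles = np.array([p_succ(1) ** K for K in range(1, 51)])
```

The joint probability climbs monotonically towards 1, while the product of single-copy probabilities decays exponentially, confirming that Theorem 2 fails for perfectly correlated processes.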

Estimation of two independent phase shifts with a correlated payoff function.
Phase estimation is another great classic of quantum estimation theory [16,17], with applications to quantum clocks [4] and high-precision interferometry (see [10,13] for an overview of the relevant literature). In the usual scenario, one is given access to multiple queries to the same black box implementing an unknown phase shift, and the question is how the precision of the estimation increases with the number of queries [4,12]. Here we consider instead a different scenario: two black boxes implementing different (uncorrelated) phase shifts are given, and the goal is to estimate the values of the two shifts. A priori, since the values of the two phase shifts are independent, it may sound natural that the optimal estimation strategy consists in estimating each phase shift independently. However, in the following we will see that an arbitrarily small amount of correlation in the figure of merit used to judge the quality of the estimation can critically change the features of the optimal network, with the optimal input state changing suddenly from factorized to maximally entangled.
Let us see in detail how the example works. Consider the estimation of two independent phase shifts on two qubit systems, with Hilbert spaces H_1 and H_2, respectively (H_1 ≃ H_2 ≃ ℂ²). Denoting by |0⟩ and |1⟩ the two orthonormal vectors of the standard basis for ℂ², the phase shifts on a qubit system are given by U_x = |0⟩⟨0| + e^{ix} |1⟩⟨1|, x ∈ [0, 2π). We assume that the phase shifts on the two qubits are uniformly distributed according to the Haar measure dx/2π. The problem is then to find the best estimate of the unknown parameter x := (x_1, x_2) characterizing the black boxes U_{x_1} and U_{x_2}. As a figure of merit, we consider the maximization of the payoff function

g_p(x̂, x) = p cos(x̂_1 + x̂_2 − x_1 − x_2) + (1 − p) cos(x̂_1 − x̂_2 − x_1 + x_2)

for some p ∈ [0, 1]. Note that g_p is a convex combination of the figure of merit cos(x̂_1 + x̂_2 − x_1 − x_2), which quantifies how good our estimate of the sum s := x_1 + x_2 is, and of the figure of merit cos(x̂_1 − x̂_2 − x_1 + x_2), which quantifies how good our estimate of the difference d := x_1 − x_2 is. In other words, we can interpret g_p as expressing the fact that, with probability p, we will be asked to estimate the sum, while with probability (1 − p) we will be asked to estimate the difference.
Due to the symmetry of the problem, it is enough to consider quantum networks where the two unknown phase shifts are applied in parallel on a suitable entangled state |E⟩ ∈ H_1 ⊗ H_2, as proven in Ref. [34]. No additional reference system is needed, because the black boxes form a unitary representation of an Abelian group [36]. Hence, the problem is reduced to the optimal estimation of x from the output state |E_x⟩ := (U_{x_1} ⊗ U_{x_2})|E⟩. From the theory of optimal estimation of group parameters [36] we know that the optimal measurement is given by the covariant POVM. Incidentally, we note that the POVM is of the product form P_x̂ = P_{1,x̂_1} ⊗ P_{2,x̂_2}. By direct calculation, we then find that the average value of g_p is γ_p = ⟨E|G_p|E⟩ for a suitable operator G_p. The maximum eigenvalue of G_p is λ_max = max{p/2, (1 − p)/2}, corresponding to the nondegenerate eigenvector |E⟩ = (|00⟩ + |11⟩)/√2 for p > 1/2 and |E⟩ = (|01⟩ + |10⟩)/√2 for p < 1/2. For p = 1/2 the maximum eigenvalue is degenerate, and the optimal input state can be chosen of the product form |E⟩ = |+⟩|+⟩, with |+⟩ = (|0⟩ + |1⟩)/√2. The qualitative explanation of this behaviour is the following: for p = 1/2 the figure of merit is factorized [g_{1/2}(x̂, x) = cos(x̂_1 − x_1) cos(x̂_2 − x_2)] and the optimal estimation strategy can be chosen to be factorized too. For every value p ≠ 1/2, the degeneracy is removed and the optimal input state suddenly becomes maximally entangled. The optimal input state depends in a discontinuous way on the parameter p: the (unique) optimal input state for p > 1/2 is orthogonal to the (unique) optimal input state for p < 1/2. Note, however, that there is no discontinuity in the average payoff.

5.1.3. Estimating the sum of K independent phase shifts. The relation between the correlations in the figure of merit and the correlations in the optimal estimating network can also be observed in the case of multiple independent phase shifts. Suppose that we have K identical systems, with Hilbert spaces H_k ≃ ℂ^N for all k = 1, ..., K, and suppose that each system undergoes an independent phase shift U^{(k)}_{x_k} := e^{i x_k H^{(k)}}, where H^{(k)} := Σ_{n=1}^N n |n⟩⟨n| for every k, {|n⟩} being the computational basis. If we want to estimate the sum s := Σ_k x_k, a natural figure of merit is the minimization of the expected value of the cost function c(ŝ, s) = 2[1 − cos(ŝ − s)]. This cost function is well known in the phase estimation literature as a smooth and periodic version of the variance [16,17,4,12]: for small ŝ − s, we have indeed c(ŝ, s) ≈ (ŝ − s)². Clearly, minimizing c is equivalent to maximizing the payoff function g(ŝ, s) = 1 + cos(ŝ − s).
Let us find the optimal estimation strategy. First, using the fact that the unknown black boxes form a unitary representation of an Abelian group, we know that the optimal strategy consists in applying the black boxes in parallel on an entangled input state |E⟩ ∈ H^{⊗K} [34,36]. Moreover, the value of the figure of merit is unchanged under phase shifts that leave the sum s invariant: using this symmetry it is easy to show that the input state |E⟩ can be chosen to be an eigenstate of the difference operator ∆_{ij} := H^{(i)} − H^{(j)} for every possible pair i, j. It is then straightforward that the optimal choice is |E⟩ = Σ_{n=1}^N e_n |n⟩^{⊗K}, where the e_n are suitable coefficients. The problem then becomes to estimate the sum s from the state |E_s⟩ = Σ_{n=1}^N e^{isn} e_n |n⟩^{⊗K}. From the theory of optimal phase estimation we know that the minimum cost is c_min = 4 sin²[π/(2N)], which converges to π²/N² in the limit N → ∞ (see Ref. [4]); the corresponding optimal input state is the entangled state of Ref. [4], and the optimal POVM is P_ŝ = |η_ŝ⟩⟨η_ŝ|, with |η_ŝ⟩ := Σ_{n=1}^N e^{iŝn} |n⟩^{⊗K}. It is easy to see that the use of entanglement implies an advantage over factorized strategies, where each system is prepared independently in a state |e_k⟩ and is measured independently with the optimal single-system POVM. Indeed, choosing the optimal single-system states and POVMs and summing the K independent estimates, the variances of the individual estimates add up, so that the expected cost of the best factorized strategy is K times the single-phase minimum cost; for large N we get the asymptotic expression ⟨c⟩ ≈ Kπ²/N². From the comparison with the optimal value c_min ≈ π²/N² we conclude that entangling the K systems and performing a joint measurement reduces the variance in the estimation of the sum by a factor of K.
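The single-sum optimization can be sketched numerically: for a covariant measurement the expected cost reduces to a quadratic form ⟨e|A|e⟩ in the input coefficients, with A tridiagonal, whose minimum eigenvalue is 4 sin²[π/(2(N+1))] (the N versus N+1 in the denominator depends on the labelling convention for the levels; the text quotes 4 sin²[π/(2N)]). A sketch under this standard reduction:

```python
import numpy as np

N = 200   # number of levels n = 1, ..., N of the generator spectrum

# Assumed reduction: for the covariant POVM, the expected cost 2[1 - cos(s_hat - s)]
# of the input sum_n e_n |n>^{(x)K} equals <e| A |e> with A tridiagonal:
A = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
c_min = np.linalg.eigvalsh(A)[0]          # minimum achievable cost

# closed form for the minimum eigenvalue of this tridiagonal matrix
closed_form = 4 * np.sin(np.pi / (2 * (N + 1))) ** 2
```

The minimum scales as π²/N², while summing K independent single-phase estimates would give ≈ Kπ²/N², reproducing the factor-K advantage of the entangled strategy.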

Conclusions
In this paper we addressed the estimation of an unknown quantum process that can possibly consist of a finite number of time steps. We formulated the search for the optimal quantum network for estimation as a semidefinite program, and used duality theory to give an alternative expression for the maximum payoff achieved by the optimal network. Using this result we proved a product rule for quantum metrology, showing that individual strategies are sufficient to achieve the optimal joint estimation of a set of independent processes whenever the figure of merit is of the product form. In particular, the probability of success in the discrimination of K sets of processes is the product of the probabilities of success for each set.
It is easy to see that the product rule established here for joint estimation can also be extended to the optimization of quantum networks for other tasks, such as the optimal cloning of independent sets of states and processes. In the case of pure state cloning, it has been observed in Ref. [38] that the product rule shows that the maximum global fidelity for the joint cloning of K sets of states is the product of the maximum global fidelities for each set, so that the optimal joint cloner is the product of the optimal individual cloners. Using the same type of argument, one can show that the maximum global channel fidelity for the joint cloning of K sets of unitary gates (see Ref. [39] for the definition of the cloning task) is the product of the maximum global fidelities for each set, so that the optimal joint cloning network is the product of the optimal individual networks.
Note that S^{(N)} must be positive, since we have S^{(N)} ≥ G^{(N)}_x̂ ≥ 0. Consequently, S^{(j)} must be positive for every j = 0, ..., N. Moreover, there exists at least one operator S such that L†(S) > G. The existence of such an operator, along with the fact that the maximum payoff γ_max is bounded by g_max, implies that the hypotheses of Slater's