
Entanglement area law for shallow and deep quantum neural network states


Published 7 May 2020 © 2020 The Author(s). Published by IOP Publishing Ltd on behalf of the Institute of Physics and Deutsche Physikalische Gesellschaft
Citation: Zhih-Ahn Jia et al 2020 New J. Phys. 22 053022. DOI: 10.1088/1367-2630/ab8262


Abstract

A study of the artificial neural network representation of quantum many-body states is presented. The locality and entanglement properties of states for shallow and deep quantum neural networks are investigated in detail. By introducing the notion of local quasi-product states, of which the locally connected shallow feed-forward neural network states and restricted Boltzmann machine states are special cases, we show that the Rényi entanglement entropies of all these states obey the entanglement area law. In addition, we investigate the entanglement features of deep Boltzmann machine states and show that locality constraints imposed on the neural networks make the states obey the entanglement area law. Finally, as an application, we apply the notion of Rényi entanglement entropy to understand the power of neural networks, and show that the target functions of image classification problems that can be solved efficiently must obey the entanglement area law.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Understanding the entanglement features of quantum systems is crucial for understanding many important physical phenomena. One of the most remarkable features is that entanglement entropy is often bounded by the area, rather than the volume, of a quantum system. This idea, now an important part of the holographic principle, can be applied to many different physical areas, such as topological order [1–3], the fractional quantum Hall effect [4], topological insulators and topological superconductors [5, 6], the anti-de Sitter space/conformal field theory (AdS/CFT) correspondence [7–10] and so on.

The holographic principle asserts that there is a duality between the boundary quantum field theory and the bulk gravitational theory. More precisely, it claims that (d + 1)-dimensional conformal field theories (CFTd+1) are equivalent to the gravitational theory on (d + 2)-dimensional anti-de Sitter space AdSd+2. Based on the holographic approach, Ryu and Takayanagi proved that the entanglement entropy of a subsystem $\mathcal{A}$ in CFTd+1 is related to the area of the static minimal surface ${\gamma }_{\mathcal{A}}$ in AdSd+2 whose boundary matches the boundary $\partial \mathcal{A}$; the famous Ryu–Takayanagi formula [11] reads

${S}_{\mathcal{A}}=\frac{\mathrm{Area}\left({\gamma }_{\mathcal{A}}\right)}{4{G}_{N}^{\left(d+2\right)}},$

where ${G}_{N}^{\left(d+2\right)}$ is the (d + 2)-dimensional Newton constant. The key point here is that the entanglement entropy is bounded by the area of the quantum system and there is a duality between geometry and entanglement. For applications in quantum many-body systems, it is now a well-known result that the ground states of local gapped quantum systems obey the entanglement area law [12, 13]: the entanglement entropy between a subsystem $\mathcal{A}$ and its complement ${\mathcal{A}}^{c}$ scales at most as the area $\mathrm{Area}\left(\mathcal{A}\right)$ rather than the volume $\mathrm{Vol}\left(\mathcal{A}\right)$ of the subsystem $\mathcal{A}$. Intuitively, the entanglement area law results from the fact that the correlations of particles in a 'natural' quantum system are usually local; thus the contribution to the entanglement entropy between $\mathcal{A}$ and ${\mathcal{A}}^{c}$ given by cutting the correlated pairs between $\mathcal{A}$ and ${\mathcal{A}}^{c}$ depends only on the pairs of particles in the vicinity of the boundary. Although there are many numerical and theoretical results supporting this intuitive argument, mainly in (1 + 1)D systems and in some (2 + 1)D systems, rigorously proving the entanglement area law is extremely challenging, and many sophisticated mathematical tools, like Toeplitz matrix theory [14, 15], the Fisher–Hartwig theorem [16], the Lieb–Robinson bound [17], Chebyshev polynomials [18] and so on, must be used. Establishing the entanglement area law is now one of the central problems in Hamiltonian complexity theory.

On the other hand, a real quantum many-body system has an extremely large number (about $10^{23}$ or more) of degrees of freedom, which makes it a notoriously difficult task to solve the Schrödinger equation directly. Fortunately, physical systems often have a simplified internal structure, which allows us to use exponentially fewer parameters to characterize the ground states and time evolution of the system; this makes many numerical and theoretical methods possible. The traditional mean-field approach can solve the equations for many weakly correlated systems. For strongly correlated quantum systems, many new tools have been developed in recent years. Quantum Monte Carlo sampling [19] provides a high-accuracy method for studying large systems; however, it suffers from the sign problem, which makes it inapplicable to frustrated spin systems and interacting fermion systems. Tensor network representations of quantum states, such as the density-matrix renormalization group (DMRG) and matrix product states [20], projected entangled pair states (PEPS) [21], the folding algorithm [22], entanglement renormalization [23], time-evolving block decimation (TEBD) [24], etc., play an important role in calculating 1d and 2d quantum systems and even in the construction of the AdS/CFT correspondence [25, 26]. Among all of these numerical and theoretical methods to represent and approximate quantum states, the neural network, an important tool of machine learning which shows great power in approximating given functions and extracting features from big data sets, is now attracting much interest from both physicists and computer scientists.

Neural networks were recently introduced as a new representation of quantum many-body states [27], and they show great potential in solving some traditionally difficult quantum problems, for instance, solving some physical models and studying the time evolution of these systems [27, 28], representing toric code states [29], graph states [30], stabilizer code states [31, 32] and topologically ordered states [29, 31, 33, 34], studying quantum tomography [35, 36], and so on. Quantum neural network states are currently the subject of intense research and represent a new direction for efficiently calculating ground states and unitary evolutions of many-body quantum systems. This research has stimulated an explosion of results applying machine learning methods to condensed matter physics, like distinguishing phases [37], quantum control [38], error correction of topological codes [39], etc. The interplay between machine learning and quantum physics has given birth to a new discipline, now known as quantum machine learning.

In this work, we present a study of the entanglement properties of quantum neural network states. It has been shown that locally connected restricted Boltzmann machine states obey the entanglement area law [28]. Here we give a more comprehensive study of the entanglement properties of both shallow and deep neural network states. As an application, we apply the notion of entanglement entropy to understanding the representational power of neural networks in image classification problems.

The paper is organized as follows. In section 2, we introduce the notion of a local quasi-product state and establish the entanglement area law for these states. Since locally connected neural network states are special cases of local quasi-product states, they also obey the entanglement area law. Section 3 presents the study of deep Boltzmann machine (DBM) states; by introducing the geometry of the deep Boltzmann machine, we prove that local DBM states obey the entanglement area law. In section 4, we apply the notion of Rényi entropy to understanding the power of the neural network in solving image classification problems, and we show that the target function of the classification problem of locally smooth images obeys the entanglement area law. Finally, we discuss in the last section some subtle issues of the locality and entanglement of neural network states.

2. Area-law entanglement of local quasi-product states and its applications to shallow neural network states

2.1. Notion of quasi-product states

The Schrödinger equation of a condensed matter system usually involves a large number of degrees of freedom, which makes it extremely difficult to solve exactly. However, the eigenstates of the Hamiltonians of these natural systems often have a simplified internal structure, which makes many approximate or even exact methods possible. Neural network states were recently introduced as ansatz states of many-body quantum systems, and they have attracted much attention because of their good performance in solving some problems which cannot be solved using state-of-the-art methods [27, 28, 30, 40]. Here, to explore the area-law entanglement of neural network states, we first introduce the concept of quasi-product states. As we will show later, the locality constraint imposed on the neural network architecture results in states of quasi-product form.

Let $\mathcal{S}=\left\{{s}_{1},\dots ,{s}_{N}\right\}$ be a system with N particles. By a local K-cluster cover we mean a class of local subsets of $\mathcal{S}$, viz., ${\mathcal{C}}_{1},\dots ,{\mathcal{C}}_{M}$, called local clusters, for which each ${\mathcal{C}}_{i}$ contains at most K particles in a local region and ${\cup }_{i=1}^{M}{\mathcal{C}}_{i}=\mathcal{S}$. A local K-cluster quasi-product state can then be defined as ${\Psi}\left({s}_{1},\dots ,{s}_{N}\right)={{\Phi}}_{1}\left({\mathcal{C}}_{1}\right){\times}\cdots {\times}{{\Phi}}_{M}\left({\mathcal{C}}_{M}\right)$, where each cluster term ${{\Phi}}_{i}\left({\mathcal{C}}_{i}\right)$ is a function of the degrees of freedom of the particles contained in ${\mathcal{C}}_{i}$, and the size of the clusters $K{:=}\mathrm{max}\left\{\vert {\mathcal{C}}_{i}\vert \right\}$ does not depend on the system size N. It is obvious that a product state is just a one-local quasi-product state, i.e., each ${{\Phi}}_{i}\left({\mathcal{C}}_{i}\right)$ is just the function Φi(si), since each local cluster ${\mathcal{C}}_{i}$ contains only one particle si; we will also refer to this kind of state as a local one-cluster quasi-product state.
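The definition is easy to state in code. The following minimal sketch (the helper names random_cluster and amplitude are our own illustrative choices, not notation from the paper) builds the full state vector of a small system from local cluster data only:

```python
import numpy as np
from itertools import product

# Minimal sketch of a local K-cluster quasi-product state: each cluster is a
# (sites, table) pair, where `sites` lists the particles in C_i and `table` maps
# their joint configuration to the complex factor Phi_i(C_i).

rng = np.random.default_rng(0)

def random_cluster(sites):
    """A random complex cluster function Phi_i over |sites| binary spins."""
    return sites, rng.normal(size=2 ** len(sites)) + 1j * rng.normal(size=2 ** len(sites))

def amplitude(clusters, config):
    """Psi(s_1, ..., s_N) = prod_i Phi_i(C_i), for spins s_j in {0, 1}."""
    amp = 1.0 + 0.0j
    for sites, table in clusters:
        idx = int("".join(str(config[j]) for j in sites), 2)
        amp *= table[idx]
    return amp

# N = 6 spins on a ring covered by three-local clusters (K = 3)
N = 6
clusters = [random_cluster(((k - 1) % N, k, (k + 1) % N)) for k in range(N)]
psi = np.array([amplitude(clusters, cfg) for cfg in product((0, 1), repeat=N)])
psi /= np.linalg.norm(psi)
print(psi.shape)  # (64,): the full state vector, assembled from local data only
```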

It turns out that many crucial classes of quantum states can be expressed as local quasi-product states, such as cluster states, ${\mathbb{Z}}_{2}$-toric code states, graph states, ${\mathbb{Z}}_{2}$-stabilizer code states, and Kitaev's $D\left({\mathbb{Z}}_{d}\right)$ quantum double ground states. They have all been explicitly constructed in local RBM form [29–32], and we will show later in this section that all local RBM states are local quasi-product states.

Many examples of local gapped systems come from local commutative Hamiltonians H = ∑kHk for which [Hk, Hl] = 0, ∀k, l and each local term Hk only acts on a local region ${\mathcal{S}}_{k}$ of the system. It is very natural to use the quasi-product state as an ansatz state to solve the eigenvalue equation HΨ(s1, ..., sN) = E0Ψ(s1, ..., sN). We can assign a cluster ${\mathcal{C}}_{k}$ to each local term Hk, and usually we also impose the constraint that ${\mathcal{S}}_{k}\subseteq {\mathcal{C}}_{k}$, i.e., ${\mathcal{C}}_{k}$ contains all particles on which Hk acts nontrivially. In this way, the eigenvalue equation can be simplified as

${H}_{k}{\Psi}\left({s}_{1},\dots ,{s}_{N}\right)={E}_{0}^{\left(k\right)}{\Psi}\left({s}_{1},\dots ,{s}_{N}\right),\quad \forall k,$

where ${E}_{0}^{\left(k\right)}$ is the ground state energy of Hk. Then, using some other properties of the system, like symmetry, we can alternatively solve these equations with fewer variables to give the solution of the original eigenvalue problem. Here, for illustration, we choose the cluster stabilizer code, the toric code and the graph state as examples.

Example 1. Cluster stabilizer code state, or equivalently, the ground state of the (1 + 1)D symmetry protected topological (SPT) phase Hamiltonian

$H=-{\sum }_{k=1}^{N}{\sigma }_{k-1}^{z}{\sigma }_{k}^{x}{\sigma }_{k+1}^{z},$

defined on a 1d lattice with periodic boundary conditions, can be represented by a three-local quasi-product state. Each term ${\sigma }_{k-1}^{z}{\sigma }_{k}^{x}{\sigma }_{k+1}^{z}$ is called a stabilizer. The cluster state is a ${\mathbb{Z}}_{2}{\times}{\mathbb{Z}}_{2}$ protected topological state [41], which can be used for measurement-based quantum computation [42–44]. Here, we validate the efficiency of the local quasi-product state representation by explicit construction; the local cluster is chosen as the three-local cluster corresponding to each stabilizer, i.e., ${{\Phi}}_{k}\left({\mathcal{C}}_{k}\right)={{\Phi}}_{k}\left({s}_{k-1},{s}_{k},{s}_{k+1}\right)$. The ground state satisfies ${\sigma }_{k-1}^{z}{\sigma }_{k}^{x}{\sigma }_{k+1}^{z}{\sum }_{{s}_{1},\dots ,{s}_{N}={\pm}1}{\Psi}\left({s}_{1},\dots ,{s}_{N}\right)\vert {s}_{1},\dots ,{s}_{N}\rangle ={\sum }_{{s}_{1},\dots ,{s}_{N}={\pm}1}{\Psi}\left({s}_{1},\dots ,{s}_{N}\right)\vert {s}_{1},\dots ,{s}_{N}\rangle $ for all k, which is equivalent to

Equation (1): ${\Psi}\left({s}_{1},\dots ,{s}_{k-1},{s}_{k},{s}_{k+1},\dots ,{s}_{N}\right)={s}_{k-1}{s}_{k+1}\,{\Psi}\left({s}_{1},\dots ,{s}_{k-1},-{s}_{k},{s}_{k+1},\dots ,{s}_{N}\right),\quad \forall k.$

Using the three-local quasi-product ansatz ${\Psi}\left({s}_{1},\dots ,{s}_{N}\right)={\prod }_{k=1}^{N}{{\Phi}}_{k}\left({s}_{k-1},{s}_{k},{s}_{k+1}\right)$ and canceling the identical terms on the two sides of the equality, we obtain

${{\Phi}}_{k-1}\left({s}_{k-2},{s}_{k-1},{s}_{k}\right){{\Phi}}_{k}\left({s}_{k-1},{s}_{k},{s}_{k+1}\right){{\Phi}}_{k+1}\left({s}_{k},{s}_{k+1},{s}_{k+2}\right)={s}_{k-1}{s}_{k+1}{{\Phi}}_{k-1}\left({s}_{k-2},{s}_{k-1},-{s}_{k}\right){{\Phi}}_{k}\left({s}_{k-1},-{s}_{k},{s}_{k+1}\right){{\Phi}}_{k+1}\left(-{s}_{k},{s}_{k+1},{s}_{k+2}\right).$

These are highly nonlinear equations, thus very difficult to solve directly, and the solution is not unique in general. But noticing that the model is translationally invariant, we can assume that all local clusters ${{\Phi}}_{k}\left({\mathcal{C}}_{k}\right)$ are of the same form. Via this simplification, we can obtain a solution:

It is easily verified that the local quasi-product state ${\Psi}\left({s}_{1},\dots ,{s}_{N}\right)={\prod }_{k=1}^{N}{{\Phi}}_{k}\left({s}_{k-1},{s}_{k},{s}_{k+1}\right)$ satisfies equation (1).
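Such a verification is easy to carry out numerically. The sketch below checks equation (1) exhaustively on a small ring for one concrete translationally invariant choice of cluster functions, the ring graph-state form (cf. example 3); this is one valid solution assumed for illustration, not necessarily the one obtained above:

```python
import numpy as np
from itertools import product

# Exhaustive check of equation (1) on a ring of N = 8 spins, using the cluster
# function Phi_k(s_{k-1}, s_k, s_{k+1}) = (-1)^(n_{k-1} n_k) with n = (1 - s)/2
# (a three-local cluster that happens to use only two of its allowed sites).

N = 8

def phi(s_prev, s_cur):
    n_prev, n_cur = (1 - s_prev) // 2, (1 - s_cur) // 2
    return (-1.0) ** (n_prev * n_cur)

def psi(s):
    return np.prod([phi(s[(k - 1) % N], s[k]) for k in range(N)])

for s in product((1, -1), repeat=N):
    for k in range(N):
        flipped = list(s)
        flipped[k] = -flipped[k]
        # equation (1): Psi(s) = s_{k-1} s_{k+1} Psi(s_1, ..., -s_k, ..., s_N)
        assert np.isclose(psi(s), s[(k - 1) % N] * s[(k + 1) % N] * psi(flipped))
print("equation (1) holds for all 2^8 configurations")
```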

Example 2. Let us now consider the toric code model (the ${\mathbb{Z}}_{2}$-Kitaev quantum double model) [45], which is the simplest model of topologically ordered states and plays an important role in quantum error correcting codes and fault-tolerant quantum computation [46]. Given an L × L square lattice with periodic boundary conditions (i.e., on a 2d torus ${\mathbb{T}}^{2}$), on each edge there is an associated spin space ${\mathbb{C}}^{2}$; thus there are N = 2L × L qubits in total. To each vertex v and plaquette p we assign a stabilizer operator ${A}_{v}={\prod }_{j\in \partial v}{\sigma }_{j}^{x}$ and ${B}_{p}={\prod }_{j\in \partial p}{\sigma }_{j}^{z}$ respectively, and the Hamiltonian is of the form

$H=-{\sum }_{v}{A}_{v}-{\sum }_{p}{B}_{p}.$

The ground state of the Hamiltonian is four-fold degenerate (which corresponds to the order of the first ${\mathbb{Z}}_{2}$ homology group of the torus ${\mathbb{T}}^{2}$, viz., $\text{GSD}=\vert {H}_{1}\left({\mathbb{T}}^{2},{\mathbb{Z}}_{2}\right)\vert $). Let us briefly recall how to calculate the ground state of the toric code model. Consider the constraints imposed by the plaquette operators, i.e., Bp|Ω⟩ = |Ω⟩. In the σz basis {|0⟩, |1⟩}, the set of spin configurations is {|00⋯0⟩, |00⋯1⟩, ..., |11⋯1⟩}. Assume that

$\vert {\Omega}\rangle ={\sum }_{\mathbf{s}}{c}_{\mathbf{s}}\vert \mathbf{s}\rangle ,$

then for ${B}_{p}={\sigma }_{{p}_{1}}^{z}{\sigma }_{{p}_{2}}^{z}{\sigma }_{{p}_{3}}^{z}{\sigma }_{{p}_{4}}^{z}$, we have ${B}_{p}\vert \mathbf{s}\rangle ={\left(-1\right)}^{{s}_{{p}_{1}}+{s}_{{p}_{2}}+{s}_{{p}_{3}}+{s}_{{p}_{4}}}\vert \mathbf{s}\rangle $, where $\vert \mathbf{s}\rangle =\vert {s}_{{p}_{1}}{s}_{{p}_{2}}{s}_{{p}_{3}}{s}_{{p}_{4}}\rangle \otimes \vert \cdots \rangle $ and the addition is modulo two. Therefore

${B}_{p}\vert {\Omega}\rangle ={\sum }_{\mathbf{s}}{\left(-1\right)}^{{s}_{{p}_{1}}+{s}_{{p}_{2}}+{s}_{{p}_{3}}+{s}_{{p}_{4}}}{c}_{\mathbf{s}}\vert \mathbf{s}\rangle =\vert {\Omega}\rangle .$

If ${s}_{{p}_{1}}+{s}_{{p}_{2}}+{s}_{{p}_{3}}+{s}_{{p}_{4}}=1$ the corresponding coefficient cs must be zero; thus the only spin configurations with nonvanishing coefficients are those in which an even number of |1⟩ spins is placed on the edges of each plaquette, which means that the |1⟩ spins form closed loops in the dual lattice (see figure 1):


Figure 1. The loop spin configurations in the dual lattice, represented by the red dashed lines; the unlabeled edges are all set to |0⟩. The vertex operator can transform one loop into another, e.g., Av|loop1⟩ = |loop2⟩ and Av'Av|loop1⟩ = Av'|loop2⟩ = |loop3⟩. For the torus, the longitudinal loop (red) and latitudinal loop (green) are essentially different kinds of loops; they cannot be deformed into each other by vertex operators.


As shown in figure 1, the effect of the vertex operators is just to deform one loop into another. We call two loops equivalent if they can be linked by some vertex operators; the GSD is just the number of equivalence classes of loops. There exist essentially four different kinds of loops, such as the longitudinal loop and latitudinal loop shown in figure 1; they are actually the bases (logical states) of the ground state space of the toric code model:

Here, by explicit construction, we show that the ground state of the toric code model can be represented as a four-local quasi-product state; the philosophy is very similar to what we have done above. Actually, we can assign a cluster to each vertex and plaquette, thus the state is of the form ${\Psi}\left({s}_{1},\dots ,{s}_{N}\right)={\prod }_{v}{{\Phi}}_{v}\left({\mathcal{C}}_{v}\right){\prod }_{p}{{\Phi}}_{p}\left({\mathcal{C}}_{p}\right)$, where ${\mathcal{C}}_{v}$ (resp. ${\mathcal{C}}_{p}$) only contains spins on which Av (resp. Bp) acts nontrivially. As in the cluster state construction, we have the constraints

${A}_{v}\vert {\Psi}\rangle =\vert {\Psi}\rangle ,\quad {B}_{p}\vert {\Psi}\rangle =\vert {\Psi}\rangle ,\quad \forall v,p.$

This can be transformed into a set of equations involving only local clusters around vertex v and plaquette p, as we have shown explicitly in references [31, 32] for general stabilizer codes. A solution in RBM form is provided in [29]; it is easily checked that the corresponding local clusters

form a ground state of the toric code model. The excited states can also be represented in quasi-product form in a similar way. We must stress here that this is just one of the solutions; in fact there are many other solutions, depending on the choice of the local clusters.

Example 3. Another example we consider here is the graph state, which is an important class of multipartite entangled quantum states and is useful for quantum error correcting codes, measurement-based quantum computation and so on [47]. For a given graph G with vertex set V(G) = {1, ..., N} and edge set E(G) ⊂ V(G) × V(G), the graph state is defined as

$\vert G\rangle ={\prod }_{\langle i,j\rangle \in E\left(G\right)}{U}_{ij}{\vert +\rangle }^{\otimes N},$

where Uij is a two-qubit controlled-Z gate. The wave function thus takes the form

Equation (2): ${{\Psi}}_{G}\left({s}_{1},\dots ,{s}_{N}\right)={\prod }_{\langle i,j\rangle \in E\left(G\right)}\frac{{\left(-1\right)}^{{s}_{i}{s}_{j}}}{\sqrt{2}}.$

To represent the state as a local quasi-product state, we can assign a cluster to each edge e = ⟨ieje⟩ ∈ E(G), i.e., ${{\Psi}}_{G}\left({s}_{1},\dots ,{s}_{N}\right)={\prod }_{e\in E\left(G\right)}{{\Phi}}_{e}\left({s}_{{i}_{e}},{s}_{{j}_{e}}\right)$. From equation (2) we see that ${{\Phi}}_{e}\left({s}_{{i}_{e}},{s}_{{j}_{e}}\right)=\frac{{\left(-1\right)}^{{s}_{{i}_{e}}{s}_{{j}_{e}}}}{\sqrt{2}}$, so the graph state is obviously a two-local quasi-product state.
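As a quick consistency check, the following sketch builds a small graph state both from the two-local quasi-product form of equation (2) and from the controlled-Z circuit definition, and confirms that they agree (the 4-cycle edge set is an arbitrary example):

```python
import numpy as np
from itertools import product

# Build a graph state in two ways and compare:
# (1) the two-local quasi-product  prod_e (-1)^(s_i s_j)/sqrt(2), and
# (2) controlled-Z gates applied to the uniform superposition |+>^N.

N = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# (1) quasi-product amplitudes, s_j in {0, 1}
psi_product = np.array([
    np.prod([(-1.0) ** (s[i] * s[j]) / np.sqrt(2) for (i, j) in edges])
    for s in product((0, 1), repeat=N)
])
psi_product /= np.linalg.norm(psi_product)

# (2) circuit construction: each CZ_{ij} flips the sign when s_i = s_j = 1
psi_circuit = np.full(2 ** N, 2.0 ** (-N / 2))
for idx, s in enumerate(product((0, 1), repeat=N)):
    for (i, j) in edges:
        if s[i] == 1 and s[j] == 1:
            psi_circuit[idx] *= -1.0

assert np.allclose(psi_product, psi_circuit)
print("quasi-product form reproduces the CZ-circuit graph state")
```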

2.2. Shallow neural network states

Here we will construct two important classes of quasi-product states via feed-forward and stochastic recurrent neural networks, which will be the main focus of this work. To this end, we first need to introduce the notion of geometry for neural networks.

2.2.1. The geometry of neural network states

Inspired by the geometry of tensor network states [48], here we introduce the notion of the geometry of neural network states, which turns out to be crucial for understanding entanglement features. Hereinafter, we concentrate on neural networks with a layered structure, which are also the most studied cases. The physical degrees of freedom are placed on some fixed layer of the neural network, e.g., the visible layer of a restricted Boltzmann machine or the input layer of a feed-forward neural network; this layer will be referred to as the physical layer. Notice that the physical layer has its geometry given by the physical system. For example, if the physical degrees of freedom (like spins) are placed on a square lattice, we can impose the neurons (representing physical degrees of freedom) to have the same square lattice geometry. After the geometry of the physical layer is fixed by the geometry of the physical system, all other layers are imposed to have the same geometry duplicated from the physical layer, as shown in figure 2.


Figure 2. Depiction of a K-local neural network, where the geometries of all layers are the same, duplicated from the geometry of the physical layer. For each neuron h we can define the ɛ-neighborhood B(h; ɛ); h can only connect to neurons lying in B(h; ɛ) in the previous and next layers. Under this locality constraint, K is defined as the maximum number of neurons to which a neuron of the given ɛ-connected neural network can connect. Shown here is an example of a three-local neural network architecture.


Recall that the geometry of tensor network states is characterized by the positions of local tensors and their contraction pattern; for neural network states, similar results hold. We can compare the distance between neurons in different layers since these layers have the same geometry. Now we can define the notion of locality of a neural network. A neuron ${h}_{i}^{\left(l\right)}$ in a given, say lth, layer is called locally ɛ-connected if it only connects to the neurons in the (l − 1)th and (l + 1)th layers in the ɛ-neighborhood of hi (see figure 2 for illustration). If all the neurons of a neural network are locally ɛ-connected, we say the neural network is a local ɛ-connected neural network. A similar construction has been used in references [29, 31, 32, 49] for exactly constructing neural network states of some physical systems. When, in a local ɛ-connected neural network, each neuron ${h}_{i}^{\left(l\right)}$ only connects with K neurons in both the (l − 1)th and (l + 1)th layers, we call it a K-local neural network. For a K-local neural network, a corresponding quantum state can be given. Usually, there are two different ways to build quantum neural network states [40]: the first approach, which is the one we use in this work, is to introduce complex weights and biases into the neural network; the second approach is to represent the amplitude and phase of a wavefunction separately. We will prove that quantum states built from K-local neural networks obey the entanglement area law, since they are all quasi-product states, and the entanglement area law of quasi-product states will be established later. To make this construction clearer, let us see two important examples.
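In code, such a locality constraint is simply a mask on the allowed inter-layer connections. A minimal sketch for n neurons per layer on a 1d ring (the function name and the periodic geometry are our own illustrative choices):

```python
import numpy as np

# Connectivity mask for a local eps-connected layered network: neuron j in one
# layer may connect only to neurons within ring distance eps in adjacent layers
# (the geometry is duplicated from the physical layer).

def local_mask(n, eps):
    idx = np.arange(n)
    dist = np.abs(idx[:, None] - idx[None, :])
    dist = np.minimum(dist, n - dist)   # ring distance on the duplicated geometry
    return dist <= eps                  # True where a connection is allowed

mask = local_mask(8, 1)                 # eps = 1 gives K = 3 neighbours per neuron
print(mask.sum(axis=1))                 # [3 3 3 3 3 3 3 3]
```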

2.2.2. Local restricted Boltzmann machine states

In this part, we introduce the notion of restricted Boltzmann machine states, which were introduced in reference [27] for calculating ground states and unitary evolution of strongly correlated many-body systems. The RBM was invented by Smolensky [50]; it is an energy-based neural network model [51, 52]. Since an RBM has only two layers of neurons, one visible layer and one hidden layer, it can be regarded as a shallow neural network.

We now build quantum states from local RBMs and show that they are local quasi-product states. RBMs have a layered structure, which makes the locality defined above applicable. To construct a local RBM state, we first impose the locality constraints on the visible layer, which is nothing but the physical layer; each visible neuron corresponds to a physical degree of freedom (e.g. a spin), denoted as $\mathcal{S}=\left\{{v}_{1},\dots ,{v}_{n}\right\}$, and the geometry of the visible layer is inherited from the physical system. The hidden neurons are denoted as {h1, ..., hm}; they are placed on the hidden layer with geometry duplicated from the visible layer, viz, the distance between two neurons is defined the same way as in the visible layer (whose distance is inherited from the physical system). The weight between hj and vi is denoted as Wij, and the biases of vi and hj are ai and bj respectively. The RBM representation of quantum states is obtained by tracing out all hidden neurons, viz,

Equation (3): ${{\Psi}}_{\text{RBM}}\left({v}_{1},\dots ,{v}_{n}\right)={\sum }_{\left\{{h}_{j}\right\}}\mathrm{exp}\left({\sum }_{i}{a}_{i}{v}_{i}+{\sum }_{j}{b}_{j}{h}_{j}+{\sum }_{\langle ij\rangle }{v}_{i}{W}_{ij}{h}_{j}\right)={\prod }_{i}{\mathrm{e}}^{{a}_{i}{v}_{i}}{\prod }_{j}{{\Gamma}}_{j}\left({v}_{i}:\langle {v}_{i}{h}_{j}\rangle \right),$

where ${{\Gamma}}_{j}\left({v}_{i}:\langle {v}_{i}{h}_{j}\rangle \right)=2\mathrm{cosh}\left({b}_{j}+{\sum }_{i:\langle ij\rangle }{v}_{i}{W}_{ij}\right)$ if hj = ±1 and ${{\Gamma}}_{j}\left({v}_{i}:\langle {v}_{i}{h}_{j}\rangle \right)=1+\mathrm{exp}\left({b}_{j}+{\sum }_{i:\langle ij\rangle }{v}_{i}{W}_{ij}\right)$ if hj = 0, 1, and by the notation ⟨ij⟩ we mean that hj and vi are connected. The K-local RBM state can be defined as one whose hidden neurons hj only connect with at most K visible neurons ${v}_{{j}_{k}}$ in a local ɛ-neighborhood of hj. From the construction it is easily checked that ${{\Psi}}_{\text{RBM}}\left({v}_{1},\dots ,{v}_{n}\right)={\prod }_{j}{{\Phi}}_{j}\left({v}_{i}:\langle {v}_{i}{h}_{j}\rangle \right)$. This kind of construction has been used in references [29, 31, 32, 49] for investigating physical properties of complex systems.

Let us now see a one-dimensional example. As shown in figure 3(a), the quantum state built from a three-local RBM neural network is a local quasi-product state. From equation (3), it is easily checked that Ψ(v1, ..., v9) = Φ1(v1, v2) × Φ2(v1, v2, v3) × ⋯ × Φ9(v8, v9); thus it is a three-local quasi-product state.
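The factorization in equation (3) can be verified directly. The sketch below compares a brute-force sum over all hidden configurations with the product of local Γ factors, for a three-local RBM with random complex weights (the weights and connection windows are illustrative assumptions):

```python
import numpy as np
from itertools import product

# Sketch for figure 3(a): a three-local RBM on n = 9 visible spins with hidden
# units h_j in {-1, +1}. We check that the brute-force sum over the hidden layer
# reproduces the quasi-product of factors Gamma_j = 2 cosh(b_j + sum_i v_i W_ij).

rng = np.random.default_rng(1)
n = 9
neigh = [[i for i in (j - 1, j, j + 1) if 0 <= i < n] for j in range(n)]
W = {(i, j): rng.normal() + 1j * rng.normal() for j in range(n) for i in neigh[j]}
a = rng.normal(size=n) + 1j * rng.normal(size=n)
b = rng.normal(size=n) + 1j * rng.normal(size=n)

def psi_sum(v):
    """Brute-force sum over all 2^n hidden configurations h in {-1, +1}^n."""
    total = 0.0j
    for h in product((1, -1), repeat=n):
        E = np.dot(a, v) + np.dot(b, h) + sum(v[i] * W[i, j] * h[j] for (i, j) in W)
        total += np.exp(E)
    return total

def psi_factored(v):
    """Equation (3): e^{a.v} * prod_j 2 cosh(b_j + sum_i v_i W_ij)."""
    gam = [2 * np.cosh(b[j] + sum(v[i] * W[i, j] for i in neigh[j])) for j in range(n)]
    return np.exp(np.dot(a, v)) * np.prod(gam)

v = rng.choice([1, -1], size=n)
assert np.isclose(psi_sum(v), psi_factored(v))
print("hidden-layer sum factorizes into three-local cluster functions")
```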


Figure 3. (a) Example of one-dimensional three-local RBM state; (b) example of one-dimensional three-local feed-forward neural network state.


2.2.3. Local feed-forward neural network states

Another crucial class of quasi-product states is local feed-forward neural network states. To start with, let us briefly recall the notion of a feed-forward neural network. The neuron of the feed-forward neural network is modeled by the McCulloch–Pitts neuron model [53]: n input values x1, x2, ..., xn are transmitted through n corresponding weighted connections with weights w1, w2, ..., wn. After the input values have reached the neuron, they are summed with weights, ${\sum }_{i=1}^{n}{w}_{i}{x}_{i}$, and the result is then compared with the bias b of the neuron to determine whether it is activated or deactivated. The activation status is characterized by the activation function F. Therefore the output of the neuron is given by $y=F\left({\sum }_{i=1}^{n}{w}_{i}{x}_{i}-b\right)$. There are several commonly used activation functions, such as the step function, the sigmoid function and so on. Here, to make the construction more general, we do not restrict the form of the activation function, and we allow the activation function of each neuron in a neural network to be different. A feed-forward neural network consists of several layers of neurons for which the neurons in adjacent layers are connected with each other but there is no intra-layer connection, as shown in figure 3(b).
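For concreteness, a McCulloch–Pitts-style neuron reads as follows (the step activation is one common choice; the complex-valued variant used for quantum states is sketched after the next paragraph):

```python
import numpy as np

# A McCulloch-Pitts-style neuron as described above: y = F(sum_i w_i x_i - b).

def neuron(x, w, b, F=lambda z: np.heaviside(z, 1.0)):
    return F(np.dot(w, x) - b)

print(neuron(np.array([1.0, 0.0, 1.0]), np.array([0.5, -0.2, 0.7]), b=1.0))  # 1.0
```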

To build quantum states from a feed-forward neural network, complex weights, biases and complex activation functions need to be introduced. We assume the output values of the output layer are y1 = F1(v1, ..., vn), ..., ym = Fm(v1, ..., vn); the quantum state is constructed as their product, ${\Psi}\left({v}_{1},\dots ,{v}_{n}\right)={\prod }_{j=1}^{m}{F}_{j}\left({v}_{1},\dots ,{v}_{n}\right)$, where the normalization factor is omitted. If we add locality constraints on the connections between each layer, then we get the local feed-forward neural network states. See figure 3(b) for an example. The locality constraints make the corresponding states quasi-product states. As in figure 3(b), the value of the first output neuron Φ1(v1, v2, v3) = F[G1(v1, v2), G2(v2, v3)] only depends on particles v1, v2, v3; the corresponding quantum state is of the form Ψ(v1, ..., v9) = Φ1(v1, v2, v3) × ⋯ × Φ7(v7, v8, v9), which is obviously a quasi-product state. It is worth mentioning that the number of layers of the network should not be too large, otherwise the size of the local cluster $\vert \mathcal{C}\vert $ of the corresponding state will become comparable with the system size N, which breaks the locality constraint.
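A minimal sketch of such a state, assuming the figure 3(b) architecture with tanh activations and random complex parameters (all illustrative choices, not the paper's specific construction):

```python
import numpy as np

# Sketch for figure 3(b): a local feed-forward network state with complex weights,
# a hidden layer of two-local units G and an output layer of three-local units F,
# Psi(v) = prod_j F_j[G1(v_j, v_{j+1}), G2(v_{j+1}, v_{j+2})].

rng = np.random.default_rng(2)
n = 9
w = rng.normal(size=(n - 2, 4)) + 1j * rng.normal(size=(n - 2, 4))   # G-unit weights
u = rng.normal(size=(n - 2, 2)) + 1j * rng.normal(size=(n - 2, 2))   # F-unit weights
bg = rng.normal(size=(n - 2, 2)) + 1j * rng.normal(size=(n - 2, 2))  # G-unit biases
bf = rng.normal(size=n - 2) + 1j * rng.normal(size=n - 2)            # F-unit biases

def amplitude(v):
    """Each three-local output unit contributes one cluster factor Phi_j."""
    amp = 1.0 + 0.0j
    for j in range(n - 2):
        g1 = np.tanh(w[j, 0] * v[j] + w[j, 1] * v[j + 1] - bg[j, 0])
        g2 = np.tanh(w[j, 2] * v[j + 1] + w[j, 3] * v[j + 2] - bg[j, 1])
        amp *= np.tanh(u[j, 0] * g1 + u[j, 1] * g2 - bf[j])
    return amp

print(amplitude(np.ones(n)))  # one (unnormalized) amplitude of the 9-spin state
```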

2.3. Entanglement area law of local quasi-product states

Entanglement entropy is a crucial theoretical tool for investigating quantum many-body systems. We now establish the entanglement area law for local quasi-product states; since local RBM states and local feed-forward neural network states are special cases of local quasi-product states, they all obey the entanglement area law. We prove that in arbitrary spatial dimension, local quasi-product states obey the entanglement area law for an arbitrary connected bipartition of the system. More precisely, we have the following theorem:

Theorem 1. For an N-particle system $\mathcal{S}$, suppose that

${\Psi}\left({s}_{1},\dots ,{s}_{N}\right)={\prod }_{i=1}^{M}{{\Phi}}_{i}\left({\mathcal{C}}_{i}\right)$

is a K-local quasi-product quantum state; then the Rényi entropies of the reduced density matrix ${\rho }_{\mathcal{A}}={\mathrm{Tr}}_{{\mathcal{A}}^{c}}\vert {\Psi}\rangle \langle {\Psi}\vert $ with respect to the bipartition $\mathcal{S}=\mathcal{A}\bigsqcup {\mathcal{A}}^{c}$ satisfy the following area law

Equation (4): ${S}_{\alpha }\left({\rho }_{\mathcal{A}}\right){\leqslant}\zeta \left(K\right)\,\mathrm{Area}\left(\mathcal{A}\right),$

where $\mathrm{Area}\left(\mathcal{A}\right)$ denotes the number of particles on the boundary of $\mathcal{A}$ and ζ(K) is a scaling factor that depends only on the size K of the local clusters.

Proof. We first define three kinds of local clusters: (i) the clusters which only contain particles in $\mathcal{A}$ (as ${\mathcal{C}}_{\mathrm{int}}$ in figure 4), called internal clusters; (ii) the clusters which only contain particles in ${\mathcal{A}}^{c}$ (as ${\mathcal{C}}_{\text{ext}}$ in figure 4), called external clusters; and (iii) the clusters which contain particles both in $\mathcal{A}$ and ${\mathcal{A}}^{c}$ (as ${\mathcal{C}}_{bd}$ in figure 4), called boundary clusters. We will denote the set of particles contained in boundary clusters as $\mathcal{B}=\partial \mathcal{A}\cup \partial {\mathcal{A}}^{c}$, where $\partial \mathcal{A}=\mathcal{A}\cap \left({\cup }_{i}{\mathcal{C}}_{bd}^{\left(i\right)}\right)$, ${\mathcal{C}}_{bd}^{\left(i\right)}$ are all boundary clusters, and similarly for $\partial {\mathcal{A}}^{c}$. The interior of $\mathcal{A}$, denoted as $\mathrm{I}\mathrm{n}\mathrm{t}\mathcal{A}$, is defined as $\mathrm{I}\mathrm{n}\mathrm{t}\mathcal{A}=\mathcal{A}{\backslash}\partial \mathcal{A}$; the exterior of $\mathcal{A}$, denoted as $\mathrm{E}\mathrm{x}\mathrm{t}\mathcal{A}$, is defined as $\mathrm{E}\mathrm{x}\mathrm{t}\mathcal{A}={\mathcal{A}}^{c}{\backslash}\partial {\mathcal{A}}^{c}$.

Since ${\Psi}\left({s}_{1},\dots ,{s}_{N}\right)={\prod }_{i=1}^{M}{{\Phi}}_{i}\left({\mathcal{C}}_{i}\right)$ is of quasi-product form, using the locality of each local cluster we have

Equation (5): $\vert {\Psi}\rangle ={\sum }_{\left\{{s}_{i}:{s}_{i}\in \mathcal{B}\right\}}{\prod }_{j}{\Phi}\left({\mathcal{C}}_{bd}^{\left(j\right)}\right)\,\vert {{\Phi}}_{L}\left(\partial \mathcal{A}\right)\rangle \otimes \vert {{\Phi}}_{R}\left(\partial {\mathcal{A}}^{c}\right)\rangle ,$

where $\vert {{\Phi}}_{L}\left(\partial \mathcal{A}\right)\rangle ={\sum }_{{s}_{n}\in \mathrm{I}\mathrm{n}\mathrm{t}\mathcal{A}}{\prod }_{{\mathcal{C}}_{\mathrm{int}}^{\left(j\right)}\subset \mathcal{A}}{\Phi}\left({\mathcal{C}}_{\mathrm{int}}^{\left(j\right)}\right)\vert \mathcal{A}\rangle $ and $\vert {{\Phi}}_{R}\left(\partial {\mathcal{A}}^{c}\right)\rangle ={\sum }_{{s}_{l}\in \mathrm{E}\mathrm{x}\mathrm{t}\mathcal{A}}{\prod }_{{\mathcal{C}}_{\text{ext}}^{\left(k\right)}\subset {\mathcal{A}}^{c}}{\Phi}\left({\mathcal{C}}_{\text{ext}}^{\left(k\right)}\right)\vert {\mathcal{A}}^{c}\rangle $; the states $\vert {{\Phi}}_{L}\left(\partial \mathcal{A}\right)\rangle $ are labeled by the particles contained in $\partial \mathcal{A}$ and the states $\vert {{\Phi}}_{R}\left(\partial {\mathcal{A}}^{c}\right)\rangle $ are labeled by the particles contained in $\partial {\mathcal{A}}^{c}$. If each local variable si can take p values, there are at most ${p}^{\vert \mathcal{B}\vert }$ terms in the summation of equation (5). We stress here that $\vert \mathcal{B}\vert $ only depends on the area $\mathrm{Area}\,\mathcal{A}$ and the cluster size K; more precisely, $\vert \mathcal{B}\vert {\leqslant}R{\times}\mathrm{Area}\,\mathcal{A}$ for some constant R determined by K. Therefore, after tracing out the ${\mathcal{A}}^{c}$ part, we get ${\rho }_{\mathcal{A}}$ with rank at most ${p}^{\vert \mathcal{B}\vert }$, thus the Rényi entropy ${S}_{\alpha }\left(\mathcal{A}\right)$ is upper bounded by $\zeta \left(K\right)\mathrm{Area}\,\mathcal{A}$.□


Figure 4. The depiction of internal cluster ${\mathcal{C}}_{\mathrm{int}}$, external cluster ${\mathcal{C}}_{\text{ext}}$ and boundary cluster ${\mathcal{C}}_{bd}$ for a bipartition $\mathcal{A}$ and ${\mathcal{A}}^{c}$ of the system.

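Theorem 1 can also be illustrated numerically. For a random three-local quasi-product state on a ring, the Rényi-2 entropy of a block of m contiguous spins stays bounded as m grows, since a 1d 'area' is O(1) sites; a sketch:

```python
import numpy as np
from itertools import product

# Numerical illustration of theorem 1 in 1d: for a random 3-local quasi-product
# state on a ring, S_2 of a left block A = {s_1, ..., s_m} stays bounded as m
# grows, instead of growing with m.

rng = np.random.default_rng(3)
N = 12

# one random complex cluster function Phi_k(s_{k-1}, s_k, s_{k+1}) per site
tables = [rng.normal(size=8) + 1j * rng.normal(size=8) for _ in range(N)]

def amplitude(s):
    amp = 1.0 + 0.0j
    for k in range(N):
        idx = 4 * s[(k - 1) % N] + 2 * s[k] + s[(k + 1) % N]
        amp *= tables[k][idx]
    return amp

psi = np.array([amplitude(s) for s in product((0, 1), repeat=N)])
psi /= np.linalg.norm(psi)

for m in range(1, N):
    block = psi.reshape(2 ** m, 2 ** (N - m))        # bipartition A | A^c
    p = np.linalg.svd(block, compute_uv=False) ** 2  # Schmidt spectrum of rho_A
    S2 = -np.log2(np.sum(p ** 2))                    # Renyi-2 entropy
    print(f"|A| = {m:2d}: S_2 = {S2:.3f}")           # bounded, roughly m-independent
```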

For a physical system with a fixed background space-time, particles only interact with their neighborhoods; the strength of the interaction usually decays to zero if particles are sufficiently far apart. Thus for any many-body physical system, there is a pre-existing geometry which characterizes the distance between two particles. However, when we write down a many-body quantum state, this geometrical information is usually erased in some sense. For a multipartite quantum system $\mathcal{S}$, we have a state $\vert {{\Psi}}_{\mathcal{S}}\rangle $ which encodes the full information of the system. To understand the entanglement features of $\vert {{\Psi}}_{\mathcal{S}}\rangle $, we first divide $\mathcal{S}$ into two parts $\mathcal{A}$ and ${\mathcal{A}}^{c}$; note that the Rényi entropy ${S}_{\alpha }\left(\mathcal{A}\right)$ of the reduced density matrix ${\rho }_{\mathcal{A}}$ quantifies the entanglement between $\mathcal{A}$ and ${\mathcal{A}}^{c}$. We can then define the entanglement feature of $\vert {{\Psi}}_{\mathcal{S}}\rangle $ as the set of Rényi entropies ${S}_{\alpha }\left(\mathcal{A}\right)$ over all subsystems $\mathcal{A}$. The entanglement feature of the system usually contains the information of the geometry of the system; namely, using the entanglement-geometry correspondence (duality), we can recover the geometric information from entanglement features.

Since the paradigm of the neural network is to adjust the connection weights, when a weight is zero we say there is no connection between the two neurons. It is then natural that the geometry of the neural network is reflected in the connectivity of the network. In the spirit of the entanglement-geometry correspondence, the entanglement is then encoded in the connectivity of the neural network of the state. This is consistent with the intuition we get from tensor network states, for which we can easily read off entanglement properties from the geometry of the tensor network. Since locally connected neural network states are all local quasi-product states, they all obey the entanglement area law. The locality of the states is encoded in the connection pattern of the neural network, which agrees well with our intuition. This kind of construction will also be useful for understanding the entanglement-geometry correspondence [49], such as the Ryu–Takayanagi formula in a discrete form [11, 25, 26].

Another issue worth mentioning is the topological entanglement entropy [54], which is a sub-leading term of $S\left(\mathcal{A}\right)$. More precisely, for a gapped system, the entanglement entropy is expected to take the form $S\left(\mathcal{A}\right)=\zeta \,\mathrm{Area}\,\mathcal{A}-\gamma +\mathcal{O}\left(\vert \mathrm{Area}\,\mathcal{A}{\vert }^{-\beta }\right)$, where ζ, γ, β ⩾ 0; the first term is the area law contribution and γ is called the topological entanglement entropy. The topological entanglement entropy Stop = γ is a universal quantum number which can be used to detect whether a system is topologically ordered or, equivalently, whether the state is long-range entangled. The word universal means that for states with the same topological order, the topological entanglement entropy is the same. The local quasi-product state is a good ansatz state for gapped many-body systems, thus it can also be used to calculate the topological entanglement entropy. The procedure is as follows: for a local gapped quantum system H = ∑kHk, take the ansatz state ${\Psi}={\prod }_{k}{{\Phi}}_{k}\left({\mathcal{C}}_{k},{{\Omega}}_{k}\right)$, where ${\mathcal{C}}_{k}$ is the local cluster corresponding to Hk, whose support (the spins contained in ${\mathcal{C}}_{k}$) is equal to or larger than the support (the spins on which the operator acts nontrivially) of Hk, and Ωk collects the variational parameters of the local cluster state Φk which, for example, can be chosen as the weights and biases of the kth portion of the local neural network of the state. Using the variational method, we can find the values of the parameters which minimize the energy functional and thus obtain the state. Then, taking a large disc region $\mathcal{D}$ with smooth boundary, we can divide it into three fan-shaped subregions $\mathcal{A},\mathcal{B},\mathcal{C}$; the topological entanglement entropy can be obtained from

${S}_{\text{top}}={S}_{\mathcal{A}\mathcal{B}}+{S}_{\mathcal{B}\mathcal{C}}+{S}_{\mathcal{C}\mathcal{A}}-{S}_{\mathcal{A}}-{S}_{\mathcal{B}}-{S}_{\mathcal{C}}-{S}_{\mathcal{A}\mathcal{B}\mathcal{C}}.$

As shown in example 2, the toric code state can be represented exactly as a local quasi-product state with the most economical choice of local clusters, thus Stop of the toric code model can be obtained exactly. In general, to improve the accuracy of the calculation, larger local clusters ought to be chosen.

3. Entanglement features of deep neural network states

In this part, we investigate the entanglement properties of deep neural network states, taking the deep Boltzmann machine (DBM) as an example. Although much progress has been made on RBM states, DBM states are less investigated [30]. There are several crucial reasons why we need deep neural networks rather than shallow ones: (i) the representational power of shallow networks is limited; there exist states which can be efficiently represented by a deep neural network but which a shallow one cannot represent [30]; (ii) any Boltzmann machine (BM) can be reduced to a DBM, which also puts some limits on the usage of shallow BMs (with just one hidden layer, viz, RBMs) [30]; (iii) the hierarchical structure of deep neural networks is more suitable for encoding holography [49, 55, 56] and for procedures such as renormalization [57].

Now let us take a closer look at the geometry of a DBM neural network. Since we can reduce a DBM with M hidden layers to a DBM with only two hidden layers by a folding trick [30] (see figure 5), it is sufficient to consider deep neural networks with only two hidden layers. The procedure is the same as for shallow neural networks. The visible layer consists of physical variables (visible neurons), thus its geometry is given by the fixed background geometry (e.g., the lattice structure of the system). The geometries of the shallow and deep hidden layers are just duplicated from the visible layer geometry. Then we can define the distance between neurons not only in the same layer but also in different layers. For a given neuron h, the ɛ-neighborhood B(h; ɛ) is defined as the disk region centered at h with radius ɛ. An ɛ-local neural network is one where each neuron only connects to neurons in its ɛ-neighborhood; if the maximum connecting number of each neuron is K, we call the network a local K-connecting (or K-local) DBM, see figure 2 for an illustration.
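A minimal sketch of a K-local two-hidden-layer DBM amplitude, with the first hidden layer summed out analytically as in the proof of section 3.1 below (all sizes, windows and weights are illustrative assumptions):

```python
import numpy as np
from itertools import product

# K-local DBM with layers v -- h -- g. The shallow hidden layer h can be summed
# out analytically: Phi_j = sum_{h_j=0,1} e^{h_j theta_j} = 1 + e^{theta_j}, with
# theta_j = b_j + sum_i W_ij v_i + sum_k W'_kj g_k; only g is summed explicitly.

rng = np.random.default_rng(4)
n = 6  # visible, shallow-hidden and deep-hidden layers all have n units
cw = lambda size=None: rng.normal(size=size) + 1j * rng.normal(size=size)
a, b, c = cw(n), cw(n), cw(n)
neigh = [[i for i in (j - 1, j, j + 1) if 0 <= i < n] for j in range(n)]  # 3-local
W = {(i, j): cw() for j in range(n) for i in neigh[j]}    # visible -> shallow hidden
Wp = {(k, j): cw() for j in range(n) for k in neigh[j]}   # deep hidden -> shallow hidden

def psi(v):
    total = 0.0j
    for g in product((0, 1), repeat=n):            # explicit sum over the deep layer
        theta = [b[j] + sum(W[i, j] * v[i] for i in neigh[j])
                      + sum(Wp[k, j] * g[k] for k in neigh[j]) for j in range(n)]
        total += np.exp(np.dot(a, v) + np.dot(c, g)) * np.prod([1 + np.exp(t) for t in theta])
    return total

print(psi(rng.integers(0, 2, size=n)))  # one (unnormalized) DBM amplitude
```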


Figure 5. Illustration of the procedure for transforming an arbitrary Boltzmann machine into a deep Boltzmann machine with only two hidden layers.


3.1. Entanglement area law

Here we show that the geometrical information, namely locality, of the deep neural network also results in an area law for the entanglement entropy.

Theorem 2. For any K-local DBM state |Ψ⟩, the Rényi entropies of the reduced density matrix ${\rho }_{\mathcal{A}}={\mathrm{Tr}}_{{\mathcal{A}}^{c}}\left(\vert {\Psi}\rangle \langle {\Psi}\vert \right)$ satisfy the following area law

Equation (6): ${S}_{\alpha }\left({\rho }_{\mathcal{A}}\right){\leqslant}\zeta \left(K\right)\,\mathrm{Area}\left(\mathcal{A}\right),$

where $\mathrm{Area}\left(\mathcal{A}\right)$ denotes the number of particles on the boundary of $\mathcal{A}$ and ζ(K) is a scaling factor that depends only on the local connection number K.

Proof. Inspired by the work of Deng et al [28], we establish the area law by explicit construction. For K-local DBM states, we can group connections into several sets using the hidden neurons in the first hidden layer. Note that for DBM states, the coefficients are ${\Psi}\left(\mathbf{v}\right)={\sum }_{\mathbf{h}}{\sum }_{\mathbf{g}}\mathrm{exp}\left\{{\sum }_{i}{v}_{i}{a}_{i}+{\sum }_{k}{c}_{k}{g}_{k}+{\sum }_{j}{h}_{j}\left({b}_{j}+{\sum }_{i;\langle ij\rangle }{W}_{ij}{v}_{i}+{\sum }_{k;\langle kj\rangle }{W}_{kj}{g}_{k}\right)\right\}$, where the sum is over the hidden neurons h = (h1, ..., hm) of the first hidden layer and g = (g1, ..., gl) of the second hidden layer, and all hidden neurons are assumed to take values 0, 1 here (of course, for the case of values ±1, the result also holds). Then, as is usually done for area-law tensor network states, we need to factorize the coefficients into a partial product form ${\Psi}\left(\mathbf{v}\right)={\prod }_{i=1}^{n}{\mathrm{e}}^{{v}_{i}{a}_{i}}{\sum }_{\mathbf{g}}\left({\prod }_{k=1}^{l}{\mathrm{e}}^{{g}_{k}{c}_{k}}{\prod }_{j=1}^{m}{{\Phi}}_{j}\right)$ in which ${{\Phi}}_{j}={{\Phi}}_{j}\left({v}_{{j}_{1}},\dots ,{v}_{{j}_{K}};{g}_{{j}_{1}},\dots ,{g}_{{j}_{K}}\right)={\sum }_{{h}_{j}=0,1}\mathrm{exp}\left\{{h}_{j}\left({b}_{j}+{\sum }_{i;\langle ij\rangle }{W}_{ij}{v}_{i}+{\sum }_{k;\langle kj\rangle }{W}_{kj}{g}_{k}\right)\right\}$, ${v}_{{j}_{1}},\dots ,{v}_{{j}_{K}}$ are the (at most) K visible neurons connected to hj, and ${g}_{{j}_{1}},\dots ,{g}_{{j}_{K}}$ the (at most) K hidden neurons in the deep hidden layer connected to hj (where we have used the assumption of K-locality). Now, using an important trick of reference [28], the visible layer neurons can be divided into six groups: ${\mathcal{A}}_{3}$ consists of visible neurons which connect to the ${\mathcal{A}}^{c}$ part via a hidden neuron, ${\mathcal{A}}_{2}$ consists of visible neurons connecting to neurons of ${\mathcal{A}}_{3}$ via a hidden neuron, and ${\mathcal{A}}_{1}=\mathcal{A}{\backslash}\left({\mathcal{A}}_{3}\bigcup {\mathcal{A}}_{2}\right)$; similarly, we can define ${\mathcal{A}}_{3}^{c}$, ${\mathcal{A}}_{2}^{c}$ and ${\mathcal{A}}_{1}^{c}$. Obviously, $\mathcal{A}$ (${\mathcal{A}}^{c}$) is the disjoint union of ${\mathcal{A}}_{1}$, ${\mathcal{A}}_{2}$ and ${\mathcal{A}}_{3}$ (${\mathcal{A}}_{3}^{c}$, ${\mathcal{A}}_{2}^{c}$ and ${\mathcal{A}}_{1}^{c}$). A similar division can be applied to the deep hidden layer (since the layer has a fixed background geometry, the same as the visible layer); assuming the corresponding bipartition of the layer is $\mathcal{B}\bigsqcup {\mathcal{B}}^{c}$, the deep hidden neurons are then grouped into six parts: ${\mathcal{B}}_{1}$, ${\mathcal{B}}_{2}$, ${\mathcal{B}}_{3}$, ${\mathcal{B}}_{3}^{c}$, ${\mathcal{B}}_{2}^{c}$ and ${\mathcal{B}}_{1}^{c}$.

Now, consider the state $\vert {\Psi}\rangle ={\sum }_{\mathbf{v}}{\Psi}\left(\mathbf{v}\right)\vert {\mathbf{v}}_{\mathcal{A}}\rangle \otimes \vert {\mathbf{v}}_{{\mathcal{A}}^{c}}\rangle ={\sum }_{\mathbf{v}}{\prod }_{i=1}^{n}{\mathrm{e}}^{{v}_{i}{a}_{i}}{\sum }_{\mathbf{g}}\left({\prod }_{k=1}^{l}{\mathrm{e}}^{{g}_{k}{c}_{k}}{\prod }_{j=1}^{m}{{\Phi}}_{j}\right)\vert {\mathbf{v}}_{\mathcal{A}}\rangle \otimes \vert {\mathbf{v}}_{{\mathcal{A}}^{c}}\rangle $. We denote the set of shallow hidden neurons which connect ${\mathcal{A}}_{3}$ and ${\mathcal{A}}_{3}^{c}$ (also ${\mathcal{B}}_{3}$ and ${\mathcal{B}}_{3}^{c}$) as ${\mathcal{C}}_{\text{Bd}}$, those which connect ${\mathcal{A}}_{1}$ and ${\mathcal{A}}_{2}$ (also ${\mathcal{B}}_{1}$ and ${\mathcal{B}}_{2}$) as ${\mathcal{C}}_{\text{Int}}$, and those which connect ${\mathcal{A}}_{1}^{c}$ and ${\mathcal{A}}_{2}^{c}$ (also ${\mathcal{B}}_{1}^{c}$ and ${\mathcal{B}}_{2}^{c}$) as ${\mathcal{C}}_{\text{Ext}}$. We can introduce the state $\vert {{\Psi}}_{\mathcal{A}}\rangle ={\sum }_{{\mathbf{v}}_{{\mathcal{A}}_{1}}}{\mathrm{e}}^{{\mathbf{v}}_{{\mathcal{A}}_{1}}\cdot {\mathbf{a}}_{{\mathcal{A}}_{1}}}{\sum }_{{\mathbf{g}}_{{\mathcal{B}}_{1}}}{\mathrm{e}}^{{\mathbf{g}}_{{\mathcal{B}}_{1}}\cdot {\mathbf{c}}_{{\mathcal{B}}_{1}}}{\prod }_{j\in {\mathcal{C}}_{\text{Int}}}{{\Phi}}_{j}\vert {\mathbf{v}}_{\mathcal{A}}\rangle $ and the state $\vert {{\Psi}}_{{\mathcal{A}}^{c}}\rangle ={\sum }_{{\mathbf{v}}_{{\mathcal{A}}_{1}^{c}}}{\mathrm{e}}^{{\mathbf{v}}_{{\mathcal{A}}_{1}^{c}}\cdot {\mathbf{a}}_{{\mathcal{A}}_{1}^{c}}}{\sum }_{{\mathbf{g}}_{{\mathcal{B}}_{1}^{c}}}{\mathrm{e}}^{{\mathbf{g}}_{{\mathcal{B}}_{1}^{c}}\cdot {\mathbf{c}}_{{\mathcal{B}}_{1}^{c}}}{\prod }_{j\in {\mathcal{C}}_{\text{Ext}}}{{\Phi}}_{j}\vert {\mathbf{v}}_{{\mathcal{A}}^{c}}\rangle $; then the state Ψ can be decomposed as

$\vert {\Psi}\rangle ={\sum }_{{\mathbf{v}}_{{\mathcal{A}}_{\text{Bd}}}}{\sum }_{{\mathbf{g}}_{{\mathcal{B}}_{\text{Bd}}}}{\mathrm{e}}^{{\mathbf{v}}_{{\mathcal{A}}_{\text{Bd}}}\cdot {\mathbf{a}}_{{\mathcal{A}}_{\text{Bd}}}}\,{\mathrm{e}}^{{\mathbf{g}}_{{\mathcal{B}}_{\text{Bd}}}\cdot {\mathbf{c}}_{{\mathcal{B}}_{\text{Bd}}}}{\prod }_{j\in {\mathcal{C}}_{\text{Bd}}}{{\Phi}}_{j}\,\vert {{\Psi}}_{\mathcal{A}}\rangle \otimes \vert {{\Psi}}_{{\mathcal{A}}^{c}}\rangle ,$

where ${\mathcal{A}}_{\text{Bd}}={\mathcal{A}}_{2}\cup {\mathcal{A}}_{3}\cup {\mathcal{A}}_{3}^{c}\cup {\mathcal{A}}_{2}^{c}$ and ${\mathcal{B}}_{\text{Bd}}={\mathcal{B}}_{2}\cup {\mathcal{B}}_{3}\cup {\mathcal{B}}_{3}^{c}\cup {\mathcal{B}}_{2}^{c}$. Tracing out the ${\mathcal{A}}^{c}$ part, we find that ${\rho }_{\mathcal{A}}$ is a weighted sum of several one-dimensional projectors (which are not necessarily orthogonal); via Gram–Schmidt orthogonalization, it is clear that the rank of ${\rho }_{\mathcal{A}}$ is upper bounded by a function f(K) that depends only on K. Since the Rényi entropy of a density matrix takes its maximum value only if all eigenvalues pi are equal, i.e., p1 = ⋯ = pr = 1/r ⩾ 1/f(K) with r the rank of the matrix, we complete the proof. □

Let us give some intuitive explanation of the construction. Since each visible neuron is correlated with neurons in its ɛ-neighborhood, we can regard ɛ as the correlation length of the state; the correlation between $\mathcal{A}$ and ${\mathcal{A}}^{c}$ comes predominantly from visible neurons sitting in a 2ɛ-strip around the boundary $\partial \mathcal{A}$. The visible neurons deep in the region $\mathcal{A}$ cannot be correlated with visible neurons deep in the region ${\mathcal{A}}^{c}$; only neurons near the boundary contribute to the Rényi entanglement entropy ${S}_{\alpha }\left(\mathcal{A}\right)$, and this results in the area law.

3.2. Entanglement volume law

As we have proved above, the locality of the neural network is reflected in the area law of entanglement. It is natural to ask what happens for a neural network with nonlocal connections. We can expect that fully connected DBMs exhibit an entanglement volume law [58–60], and this is indeed the case. In contrast to tensor networks, for which the efficiency strongly depends on the validity of the entanglement area law of the state, neural networks are still efficient in representing many-body states obeying the volume law. As has been pointed out in references [29, 30], shallow neural networks are capable of representing some critical-system states obeying the entanglement volume law. We can trivially add a deep hidden layer with some trivial connections to make a deep neural network exhibit the volume law. Despite the triviality of the construction, we want to stress some crucial points about volume-law neural networks: (i) there must be some nonlocal connections in the neural network architecture, which are the origin of the volume law entanglement; (ii) the representation is efficient, i.e., the number of hidden neurons and connections increases at most polynomially in the number of visible neurons.

Volume-law DBM states have a close relationship with maximally multipartite entangled states [61]. The philosophy behind the construction is that we can make the particles in the smaller region $\mathcal{A}$ fully correlated with its complement ${\mathcal{A}}^{c}$, such that all information of $\mathcal{A}$ is encoded in ${\mathcal{A}}^{c}$ in some way; then ${\rho }_{\mathcal{A}}$ is proportional to the identity matrix ${\mathbb{1}}_{\text{Vol}\left(\mathcal{A}\right)}$ of order $\mathrm{Vol}\left(\mathcal{A}\right)$ (where $\mathrm{Vol}\left(\mathcal{A}\right)$ denotes the number of particles contained in region $\mathcal{A}$), which further implies that the Rényi entropies satisfy the volume law. Another important issue is that the depth of the neural network is closely related to the entanglement properties of the corresponding neural network states. A neural network state with more hidden layers will tend to exhibit volume law entanglement. This can be seen most easily from the feed-forward neural network: if the number of hidden layers increases, eventually the size of the local clusters becomes comparable with the system size, and this breaks the entanglement area law.
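The contrast with the local case can be seen numerically. The sketch below builds a fully connected random complex RBM and computes Rényi-2 entropies of growing blocks; this is a typical-case illustration under random weights (scales are our own choices), not a proof:

```python
import numpy as np
from itertools import product

# Fully connected random complex RBM (nonlocal connections) on N = 10 spins.
# For typical random weights, S_2 of a block tends to grow with the block size
# instead of saturating -- volume-law-like behaviour.

rng = np.random.default_rng(5)
N, M = 10, 10
a = 0.5 * (rng.normal(size=N) + 1j * rng.normal(size=N))
b = 0.5 * (rng.normal(size=M) + 1j * rng.normal(size=M))
W = 0.5 * (rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M)))

def amplitude(v):
    v = np.asarray(v)
    return np.exp(a @ v) * np.prod(2 * np.cosh(b + v @ W))

psi = np.array([amplitude(v) for v in product((1, -1), repeat=N)])
psi /= np.linalg.norm(psi)

for m in (1, 2, 3, 4, 5):
    block = psi.reshape(2 ** m, 2 ** (N - m))
    p = np.linalg.svd(block, compute_uv=False) ** 2
    print(f"|A| = {m}: S_2 = {-np.log2(np.sum(p ** 2)):.3f} (max {m})")
```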

4. Understanding the power of neural network using Rényi entanglement entropy

The success of neural networks in tasks like image classification suggests that to understand the power of the neural network we need to establish a new information theory of functions of images, rather than just of the images themselves. In fact, the classification problem shares great mathematical similarity with the quantum spin model [62]. The images correspond to spin configurations and the target function of the image classification problem corresponds to the wave function of the spin system. The functions f of images with N = L × L pixels form a Hilbert space and, after normalization, these functions are in one-to-one correspondence with the wave functions of an L × L quantum spin model; thus the notion of the Rényi entanglement entropy ${S}_{\alpha }\left(\mathcal{A}\right)$ of a connected subregion of the image makes sense. As we will see, this entanglement entropy can be used as a measure of the difficulty of approximating a function. In general, functions obeying the entanglement volume law need $O\left({2}^{N}\right)$ parameters to approximate, while functions obeying the entanglement area law can be approximated using a neural network with poly(N) parameters. Here we will argue that the entanglement entropies of the target functions of reasonable image classification problems obey the entanglement area law.

We know from the quantum spin model that to represent a general quantum state using, e.g., a tensor network or a neural network, $O\left({2}^{N}\right)$ parameters are needed [58–60]; but with the locality constraint of the Hamiltonian, polynomially many parameters are sufficient to represent the ground state, and this is characterized by the area law of the entanglement entropy of these states [12].

Following reference [62], we first present the explicit definition of the image classification problem. The case we consider here is a two-label classification problem (a yes or no problem) of L × L-pixel black-white images; the corresponding values of the pixels are 1 and 0 for black and white respectively. The set of all images is denoted as $\mathcal{I}=\left\{I:\left\{1,\dots ,L\right\}{\times}\left\{1,\dots ,L\right\}\to \left\{0,1\right\}\right\}$; there are ${2}^{L{\times}L}$ images in $\mathcal{I}$ in total. The Hilbert space of all complex-valued functions on the image set $\mathcal{I}$ is then ${\mathcal{H}}_{\mathcal{I}}=\left\{f:\mathcal{I}\to \mathbb{C}\right\}$. The classification problem is to determine whether a given image $I\in \mathcal{I}$ lies in the target set $\mathcal{T}\subset \mathcal{I}$ or not. The target function is then defined as

Equation (7): ${f}_{\mathcal{T}}\left(I\right)=\left\{\begin{array}{ll}1,&I\in \mathcal{T},\\ 0,&I\notin \mathcal{T}.\end{array}\right.$

To measure the difference between a general f and the target function ${f}_{\mathcal{T}}$, some kind of norm ${\Vert}f\left(I\right)-{f}_{\mathcal{T}}\left(I\right){\Vert}$ is chosen, and we can construct a functional called the cost function, ${C}_{\mathcal{T}}\left[f\right]={\sum }_{I\in \mathcal{I}}{\Vert}f\left(I\right)-{f}_{\mathcal{T}}\left(I\right){\Vert}p\left(I\right)$, where p(I) is the probability distribution over the image set. Our aim is to minimize the cost function, i.e.,

${\mathrm{min}}_{f\in {\mathcal{H}}_{\mathcal{I}}}{C}_{\mathcal{T}}\left[f\right].$

It is extremely difficult to solve this optimization problem directly. In the neural network approach, the function f is represented by a neural network (with fixed architecture) with parameter set Ω = {wij, bi}, i.e., f(I) = f(Ω, I); the cost function then becomes a function of these parameters, ${C}_{\mathcal{T}}\left[f\right]={C}_{\mathcal{T}}\left({w}_{ij},{b}_{i}\right)$, and algorithms like gradient descent can then be applied to find the minimum.
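The following toy sketch makes this pipeline concrete on 2 × 2 images: a target set (images with an even number of black pixels), the quadratic cost with uniform p(I), and plain gradient descent via numerical gradients. The architecture and hyperparameters are illustrative choices:

```python
import numpy as np
from itertools import product

# Tiny two-label problem: T = 2x2 images with an even number of black pixels,
# f(Omega, I) a small feed-forward network, cost C_T = mean_I |f(I) - f_T(I)|^2.

rng = np.random.default_rng(6)
images = np.array(list(product((0, 1), repeat=4)), dtype=float)  # all 16 images
f_T = (images.sum(axis=1) % 2 == 0).astype(float)                # target function

def forward(params, X):
    W1, b1, w2, b2 = params
    hidden = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(hidden @ w2 + b2)))             # sigmoid output

def cost(params):
    return np.mean((forward(params, images) - f_T) ** 2)

params = [rng.normal(size=(4, 8)), np.zeros(8), rng.normal(size=8), np.zeros(1)]

for step in range(3000):
    for p in params:                       # numerical gradient, parameter by parameter
        flat, grad = p.ravel(), np.zeros(p.size)
        for i in range(flat.size):
            old = flat[i]
            flat[i] = old + 1e-5
            c_plus = cost(params)
            flat[i] = old - 1e-5
            grad[i] = (c_plus - cost(params)) / 2e-5
            flat[i] = old
        flat -= 1.0 * grad                 # gradient-descent update
    if step % 500 == 0:
        print(f"step {step}: cost = {cost(params):.4f}")  # typically decreases toward 0
```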

Let us see an example of an image classification problem for which an exponentially large number of parameters is needed to approximate the corresponding target function. Suppose that we have randomly generated a set of L × L-pixel images and set it as the target set $\mathcal{T}$; then $\mathcal{T}$ does not have any pattern at all. It turns out that this problem can only be solved with exponentially many parameters, and the target function obeys the volume law of entanglement [58–60]. By contrast, problems whose target set has an intrinsic pattern can be solved using polynomially many parameters. For example, determining whether a given image only contains loops can be solved efficiently, and the target function obeys the entanglement area law (this problem in fact corresponds to the toric code model of quantum spin models [45]). It is natural to ask what kind of conditions the target set of images must satisfy for the classification problem to be solved efficiently. In reference [62], two important conditions are given: (i) for images I and I', consider two connected regions $\mathcal{A}$ and ${\mathcal{A}}^{c}$; if I and I' are the same in region ${\mathcal{A}}^{c}$ then they must be the same on the boundary of $\mathcal{A}$; (ii) for two regions ${\mathcal{A}}_{I}$ and ${\mathcal{A}}_{{I}^{\prime }}$ of images I and I', the number NI,I' of possible images, for which ${\mathcal{A}}_{I}^{c}$ and ${\mathcal{A}}_{{I}^{\prime }}^{c}$ are the same, only depends on the B-range part of the boundaries of ${\mathcal{A}}_{I}$ and ${\mathcal{A}}_{{I}^{\prime }}$. These two conditions characterize a kind of smoothness of the images; we will refer to this kind of problem as a locally smooth image classification problem. From the above discussion, we come to the following result:

Theorem 3. For any target function ${f}_{\mathcal{T}}$ of a locally smooth image classification problem, the Rényi entanglement entropy satisfies the area law

Equation (8): ${S}_{\alpha }\left(\mathcal{A}\right){\leqslant}\zeta \left(B\right)\,\mathrm{Area}\left(\mathcal{A}\right),$

where ζ(B) is a scaling factor depending linearly on B and not depending on the size of the images (the number of pixels).

Proof. This can be proved straightforwardly by calculating the density matrix corresponding to the target function ${f}_{\mathcal{T}}$: tracing out the region ${\mathcal{A}}^{c}$, we obtain a density matrix ${\rho }_{\mathcal{T}}\left(\mathcal{A}\right)$ with rank at most ${2}^{\zeta \left(B\right){\times}\mathrm{Area}\left(\mathcal{A}\right)}$. To this end, we first introduce the density matrix ${\rho }_{\mathcal{T}}$ corresponding to the target function ${f}_{\mathcal{T}}$. We can assign |0⟩ or |1⟩ to each pixel Iij of the image I when it is white (Iij = 0) or black (Iij = 1) respectively; in this way, we construct a ${2}^{N}$-dimensional Hilbert space with basis $\vert I\rangle ={\otimes }_{i,j=1}^{L}\vert {I}_{ij}\rangle $. Then for a target function ${f}_{\mathcal{T}}$ as in equation (7) we can construct a quantum state

$\vert {\psi }_{\mathcal{T}}\rangle =\frac{1}{\mathcal{N}}{\sum }_{I\in \mathcal{T}}\vert I\rangle ,$

where $\mathcal{N}$ is the normalization factor.

Now consider a connected region $\mathcal{A}$ of the images, and let ${I}_{\mathcal{A}}$ and ${I}_{{\mathcal{A}}^{c}}$ denote the pixels in regions $\mathcal{A}$ and ${\mathcal{A}}^{c}$ respectively. Thus $\vert {\psi }_{\mathcal{T}}\rangle ={\sum }_{I\in \mathcal{T}}\vert I\rangle /\mathcal{N}={\sum }_{I\in \mathcal{T}}\vert {I}_{\mathcal{A}}\rangle \otimes \vert {I}_{{\mathcal{A}}^{c}}\rangle /\mathcal{N}$. To utilize the two conditions on the target images of a locally smooth image classification problem, we further divide the region $\mathcal{A}$ into ${\partial }^{B}\mathcal{A}$, which contains the pixels within B-range of the boundary of $\mathcal{A}$, and ${\mathrm{I}\mathrm{n}\mathrm{t}}^{B}\left(\mathcal{A}\right)=\mathcal{A}{\backslash}{\partial }^{B}\mathcal{A}$. The target state can be rewritten as $\vert {\psi }_{\mathcal{T}}\rangle ={\sum }_{I\in \mathcal{T}}\vert {I}_{{\partial }^{B}\mathcal{A}}\rangle \otimes \vert {I}_{{\text{Int}}^{B}\left(\mathcal{A}\right)}\rangle \otimes \vert {I}_{{\mathcal{A}}^{c}}\rangle /\mathcal{N}$. The reduced density matrix is

${\rho }_{\mathcal{T}}\left(\mathcal{A}\right)={\mathrm{Tr}}_{{\mathcal{A}}^{c}}\vert {\psi }_{\mathcal{T}}\rangle \langle {\psi }_{\mathcal{T}}\vert =\frac{1}{{\mathcal{N}}^{2}}{\sum }_{I,{I}^{\prime }\in \mathcal{T}:{I}_{{\mathcal{A}}^{c}}={I}_{{\mathcal{A}}^{c}}^{\prime }}\vert {I}_{\mathcal{A}}\rangle \langle {I}_{\mathcal{A}}^{\prime }\vert .$

Now we can divide the target image set $\mathcal{T}$ into K disjoint subsets ${\mathcal{T}}_{j},j=1,\dots ,K$ such that any two images $I,{I}^{\prime }\in {\mathcal{T}}_{j}$ share the same pixel values in region ${\mathcal{A}}^{c}$, i.e., ${I}_{{\mathcal{A}}^{c}}={I}_{{\mathcal{A}}^{c}}^{\prime }$. Thus the reduced density matrix takes the form (the normalization factor is omitted here)

${\rho }_{\mathcal{T}}\left(\mathcal{A}\right)={\sum }_{j=1}^{K}\vert {\varphi }_{j}\rangle \langle {\varphi }_{j}\vert ,\quad \text{where}\enspace \vert {\varphi }_{j}\rangle ={\sum }_{I\in {\mathcal{T}}_{j}}\vert {I}_{\mathcal{A}}\rangle .$

Then we use conditions (i) and (ii) of the locally smooth image classification problem. From condition (i), we know that K is upper bounded by ${2}^{\text{Area}\left(\mathcal{A}\right)}$: since images with the same ${\mathcal{A}}^{c}$ values must also share the same $\partial \mathcal{A}$ values, the number of possible ${\mathcal{A}}^{c}$ configurations in the target set $\mathcal{T}$ is upper bounded by the number of possible configurations of the boundary of $\mathcal{A}$, viz., $K{\leqslant}{2}^{\text{Area}\left(\mathcal{A}\right)}$. From condition (ii), for a chosen ${\mathcal{T}}_{j}$, the number of possible ${\mathrm{I}\mathrm{n}\mathrm{t}}^{B}\left(\mathcal{A}\right)$ values for images in ${\mathcal{T}}_{j}$ is upper bounded by the number of possible configurations of the B-range boundary of $\mathcal{A}$; thus there are at most ${2}^{B\mathrm{A}\mathrm{r}\mathrm{e}\mathrm{a}\left(\mathcal{A}\right)}$ terms in the summation

$\vert {\phi }_{j}\rangle ={\sum }_{I\in {\mathcal{T}}_{j}}\vert {I}_{{\partial }^{B}\mathcal{A}}\rangle \otimes \vert {I}_{{\text{Int}}^{B}\left(\mathcal{A}\right)}\rangle .$
Thus the rank of the density matrix ${\rho }_{\mathcal{T}}\left(\mathcal{A}\right)$ is upper bounded by ${2}^{\left(B+1\right)\mathrm{A}\mathrm{r}\mathrm{e}\mathrm{a}\left(\mathcal{A}\right)}$; the area law for the Rényi entropy is now established, with ζ(B) = B + 1.□
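To make the rank-counting argument concrete, the following minimal numerical sketch (our own illustration, not part of the paper) builds $\vert {\psi }_{\mathcal{T}}\rangle$ for a toy target set of 2 × 2 binary images, traces out ${\mathcal{A}}^{c}$, and reports the rank and Rényi-2 entropy of ${\rho }_{\mathcal{T}}\left(\mathcal{A}\right)$. The even-parity target set and the choice of region $\mathcal{A}$ are assumptions made purely for illustration.

```python
# Minimal sketch (not from the paper): build |psi_T> for a toy target set T
# of 2 x 2 binary images, trace out A^c, inspect rank and Renyi-2 entropy.
# The even-parity target set and the region A are illustrative assumptions.
import numpy as np
from itertools import product

L = 2                                    # 2 x 2 images, N = 4 pixels
N = L * L
region_A = [0, 1]                        # pixel indices forming region A
region_Ac = [i for i in range(N) if i not in region_A]

# toy target set: all images with an even number of black pixels
target = [img for img in product([0, 1], repeat=N) if sum(img) % 2 == 0]

# amplitude vector of |psi_T> = sum_{I in T} |I> / N in the computational basis
psi = np.zeros(2 ** N)
for img in target:
    psi[int("".join(map(str, img)), 2)] = 1.0
psi /= np.linalg.norm(psi)

# reorder pixels so region A comes first, then contract over A^c
M = psi.reshape([2] * N).transpose(region_A + region_Ac)
M = M.reshape(2 ** len(region_A), 2 ** len(region_Ac))
rho_A = M @ M.conj().T                   # reduced density matrix rho_T(A)

rank = np.linalg.matrix_rank(rho_A)
renyi2 = -np.log2(np.trace(rho_A @ rho_A).real)
print(f"rank = {rank}, S_2(A) = {renyi2:.3f} bits")   # rank = 2, S_2 = 1.000
```

The reshaped matrix M makes the block structure of the proof visible: condition (i) caps the number of blocks K, and condition (ii) caps the number of terms within each block.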

It is worth mentioning that condition (ii), concerning the B-range boundary, is what determines the area law, and B must not depend on the number of pixels of the images. Let us now see a typical example of a locally smooth image classification problem.

Example 4. (Circle determination on a torus ${\mathbb{T}}^{2}$). Consider a set of L × L-pixel images; our aim is to determine whether each image is a circle or not. Here we impose periodic boundary conditions on the images, so each image can be regarded as an image on the torus. To make the definition of a circle clearer, consider a cellulation of the torus ${\mathbb{T}}^{2}$, chosen as a square lattice on the torus; there are then three kinds of cells: faces $f\in F\left({\mathbb{T}}^{2}\right)$, edges $e\in E\left({\mathbb{T}}^{2}\right)$ and vertices $v\in V\left({\mathbb{T}}^{2}\right)$. The pixels are put on the edges of the lattice, viz., we assign 0 or 1 to each edge (see figure 1).

In homology-theoretical language, a torus image is actually a ${\mathbb{Z}}_{2}$ one-chain, which is defined as a map $I:E\left({\mathbb{T}}^{2}\right)\to \left\{0,1\right\}$. Likewise, we can define a zero-chain $c:V\left({\mathbb{T}}^{2}\right)\to \left\{0,1\right\}$; the trivial zero-chain is the one for which all vertices are set to 0. The sets of all one-chains and zero-chains are denoted ${C}^{1}\left({\mathbb{T}}^{2}\right)$ and ${C}^{0}\left({\mathbb{T}}^{2}\right)$ respectively. We can define a boundary map $\partial :{C}^{1}\left({\mathbb{T}}^{2}\right)\to {C}^{0}\left({\mathbb{T}}^{2}\right)$ which maps the value of each edge to the vertices connected by that edge; the value of ∂I at a vertex v is the sum of the values of the edges incident to v (addition modulo two). For example, consider an image (one-chain) I for which I(e) = 1 for some edge e and all other edges are set to 0. If the edge e has two end vertices u and v, then the zero-chain ∂I is the one for which v and u take value one and all other vertices take value 0. We are now in a position to define a circle: a circle (or, more rigorously, a one-circle) is a one-chain I for which ∂I is the trivial zero-chain. The set of all circle images is denoted ${Z}^{1}\left({\mathbb{T}}^{2}\right)$, which is nothing but the target set of the circle determination problem. The state corresponding to the target function of the problem is defined as

$\vert {\psi }_{\mathcal{T}}\rangle =\frac{1}{\mathcal{N}}{\sum }_{I\in {Z}^{1}\left({\mathbb{T}}^{2}\right)}\vert I\rangle ,$     (9)

which is a local quasi-product state and thus obeys the entanglement area law; see example 2.
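As a concrete illustration of the boundary map and of the target set ${Z}^{1}\left({\mathbb{T}}^{2}\right)$, the sketch below (our own construction; the edge indexing is an assumption, not the paper's convention) implements $\partial$ on an L × L periodic square lattice, tests whether a given image is a circle, and brute-force counts the circles on a 2 × 2 torus.

```python
# Minimal sketch (our own edge indexing, not the paper's) of the Z2 boundary
# map on an L x L square lattice with periodic boundary conditions.
# Horizontal edge ('h', i, j) joins vertices (i, j) and (i, j+1); vertical
# edge ('v', i, j) joins (i, j) and (i+1, j), with indices taken mod L.
from itertools import product

def boundary(one_chain, L):
    """Z2 boundary of a one-chain given as {edge: 0/1}; returns {vertex: 0/1}."""
    flips = {}
    for (kind, i, j), value in one_chain.items():
        if value == 0:
            continue
        ends = ([(i, j), (i, (j + 1) % L)] if kind == 'h'
                else [(i, j), ((i + 1) % L, j)])
        for v in ends:                     # modulo-two addition at each endpoint
            flips[v] = flips.get(v, 0) ^ 1
    return flips

def is_circle(one_chain, L):
    """An image is a circle iff its boundary is the trivial zero-chain."""
    return all(value == 0 for value in boundary(one_chain, L).values())

# a single edge is not a circle; a loop winding around the torus is
print(is_circle({('h', 0, 0): 1}, 3))                        # False
print(is_circle({('h', 0, j): 1 for j in range(3)}, 3))      # True

# brute-force count of Z^1 on the 2 x 2 torus: 2^(E - V + 1) = 2^5 = 32
L = 2
edges = [(kind, i, j) for kind in 'hv' for i in range(L) for j in range(L)]
circles = [bits for bits in product([0, 1], repeat=len(edges))
           if is_circle(dict(zip(edges, bits)), L)]
print(len(circles))                                          # 32
```

Each of the 32 bit-strings found this way corresponds to one term $\vert I\rangle$ in the superposition of equation (9).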

5. Conclusion and discussion

In this paper, we have established the entanglement area law for shallow and deep neural network states. By introducing the notion of locality into the neural network representations of quantum states, we have shown that the resulting local neural network states obey the entanglement area law. It is worth mentioning that there are some subtle issues in our construction.

The first crucial issue we want to discuss is the topology of the neural network states. Each L-layer neural network architecture corresponds to an L-partite graph G: the vertex set V(G) is divided into L disjoint subsets, which correspond to the different layers of the neural network, and there are no edges within any of these subsets (no intra-layer connections, in neural network language). We say that two neural networks are equivalent if the corresponding graphs are isomorphic, and we denote the isomorphism class of a neural network by $\mathcal{G}$. The neural networks in an isomorphism class have the same representational power, and the corresponding quantum states have exactly the same physical properties. In our construction, we fix the geometry of the physical layer of the neural network and, using this fixed background geometry, introduce the notion of locality into the neural network. Since every neural network equivalent to the network with the fixed background geometry has the same topology, the properties of the state depend only on the topology, not on the geometry, of the network. This means that whenever we want to study the neural network states represented by an equivalence class $\mathcal{G}$, we can choose a representative from $\mathcal{G}$ without loss of generality, and for convenience we can always choose the one with the fixed background geometry.
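This equivalence can be checked mechanically. The following small sketch (our own illustration, not from the paper; the tiny two-layer connectivity is an assumed toy example) represents two differently labelled layouts of the same visible–hidden architecture as bipartite graphs and verifies with networkx that they are isomorphic, i.e., that they belong to the same class $\mathcal{G}$.

```python
# Toy illustration (not from the paper) of neural-network equivalence as
# graph isomorphism, using the networkx library.
import networkx as nx

def two_layer_graph(connections):
    """Bipartite graph from a list of (visible, hidden) neuron pairs."""
    G = nx.Graph()
    G.add_edges_from((('v', a), ('h', b)) for a, b in connections)
    return G

# the same architecture with the hidden neurons labelled in two different ways
G1 = two_layer_graph([(0, 0), (1, 0), (1, 1), (2, 1)])
G2 = two_layer_graph([(0, 1), (1, 1), (1, 0), (2, 0)])

# isomorphic graphs = equivalent networks = identical physical properties
print(nx.is_isomorphic(G1, G2))   # True
```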

Another issue we want to stress is the neural network approach to the AdS/CFT correspondence, or more precisely, to the entanglement–geometry correspondence in this context. To realize the Ryu–Takayanagi formula, we first construct a neural network whose physical layer corresponds to the boundary physical degrees of freedom, while the bulk geometry is given by the hidden layers and the connections between neurons. The essential idea behind this approach to holography (or entanglement–geometry correspondence) is that the entanglement features are encoded in the neural network geometry. That is, the bulk geometry is given, the neural network is tiled on this background bulk geometry, and the structure of the neural network provides the required entanglement features on the boundary, which is exactly the dual of the bulk geometry. Note that in reference [49] the inverse problem is investigated: how to use the given entanglement features of a state to determine the optimal holographic geometry. These topics are left for future studies. During the preparation of our manuscript, we noticed that a related work was made available [56].

Acknowledgments

ZAJ and LW contributed equally to this work. We acknowledge Dong-Ling Deng for discussions during his visit to USTC, and we thank I. Glasser for bringing our attention to the works [63–65], where a special case of local quasi-product states is given (the local cluster states are of MPS form, or more generally of tensor network form). ZAJ acknowledges Zhenghan Wang and the mathematics department of UCSB for their hospitality. This work was supported by the National Key Research and Development Program of China (Grant No. 2016YFA0301700) and the Anhui Initiative in Quantum Information Technologies (Grant No. AHY080000).

Footnotes

  • In general, we can regard the lattice of the system as a graph $\mathcal{S}$; for each vertex v there is a corresponding local Hilbert space ${\mathcal{H}}_{v}$. The total space is then ${\mathcal{H}}_{\text{tot}}={\otimes }_{v\in \mathcal{S}}{\mathcal{H}}_{v}$. In this given graph, we can define the r-range neighborhood of a particle v as the set of all particles whose path distance from v is at most r (see the sketch after these footnotes). In this way, we can define the background geometry of the system.

  • In practical applications, introducing complex weights, biases and activation functions may make training the neural network difficult; to overcome this, the amplitude and phase of the quantum state are usually represented by two separate feed-forward neural networks, see, e.g., reference [40]. Here, for our purposes, we choose to use the complex neural network approach.

  • We recall that a representation of a many-body state is called efficient if the number of parameters needed to characterize the state increases at most polynomially in the number of particles n.
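For completeness, the r-range neighborhood mentioned in the first footnote has a one-line realization (our own illustration; the 5 × 5 periodic lattice is an assumed example) via the ego_graph routine of networkx:

```python
# r-range neighborhood of a vertex as a graph-distance ball (illustration only)
import networkx as nx

lattice = nx.grid_2d_graph(5, 5, periodic=True)         # 5 x 5 torus of particles
ball = nx.ego_graph(lattice, (2, 2), radius=2)          # all v within 2 steps
print(sorted(ball.nodes))                               # 13 particles incl. (2, 2)
```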
