Symmetric and antisymmetric kernels for machine learning problems in quantum physics and chemistry

We derive symmetric and antisymmetric kernels by symmetrizing and antisymmetrizing conventional kernels and analyze their properties. In particular, we compute the feature space dimensions of the resulting polynomial kernels, prove that the reproducing kernel Hilbert spaces induced by symmetric and antisymmetric Gaussian kernels are dense in the space of symmetric and antisymmetric functions, and propose a Slater determinant representation of the antisymmetric Gaussian kernel, which allows for an efficient evaluation even if the state space is high-dimensional. Furthermore, we show that by exploiting symmetries or antisymmetries the size of the training data set can be significantly reduced. The results are illustrated with guiding examples and simple quantum physics and chemistry applications.


Introduction
Kernel methods and neural networks are two of the most prevalent and versatile machine learning techniques. While various recent publications focus on invariant or equivariant deep learning algorithms, our goal is to derive kernel-based methods that exploit symmetries. Symmetries play an important role in many research areas such as physics and chemistry [1,2,3], but also point cloud classification problems [4] or problems defined on sets [5] are naturally permutation-invariant. One of the most prominent applications is in quantum physics. Systems of bosons require symmetric wave functions, whereas systems of fermions are represented by antisymmetric wave functions. Exploiting such symmetries of the underlying system is a popular and powerful approach that has been used to improve the performance of kernel-based methods as well as deep-learning algorithms. The goal is to obtain more accurate representations without increasing the number of training data points-resulting in more efficient learning algorithms-and to ensure that symmetry constraints are satisfied. In [1] and [2], for instance, neural networks and kernel approaches that take into account symmetries of molecules are constructed. These methods are then used for learning potential energy surfaces. An approach for constructing potential energy surfaces based on Gaussian processes combined with permutation-invariant kernels can be found in [6]. Gaussian processes that exploit symmetries by summing over permutations of identical atoms are also utilized in [7] to improve the accuracy of density functional theory descriptions. Moreover, the so-called SOAP (smooth overlap of atomic positions) kernel [8] is a popular framework to design translation-, rotation-, and permutation-invariant descriptors of molecules. In [9], general invariant kernels (capturing discrete and continuous transformations) for pattern analysis are defined and analyzed. 
Recently, neural network architectures for antisymmetric wavefunctions have been proposed [10,11,12,13,14] that typically operate by applying Slater determinants to the outputs. The neural networks optimize the basis functions entering the Slater determinants through a deep learning variant of a technique called backflow, a method to modify the basis functions used in quantum Monte Carlo as trial wavefunctions [15]. Neural network approaches such as FermiNet [11] and PauliNet [12] achieve extremely high accuracy with relatively few Slater determinants compared to standard quantum chemistry methods that build Slater determinants with fixed basis functions. Kernels, on the other hand, obtain rich representations by mapping the data to potentially infinite-dimensional feature spaces: any continuous antisymmetric function can be approximated by antisymmetrized universal kernels. The universal approximation of symmetric and antisymmetric functions is also studied in [16].
In this work, we develop kernels that are intrinsically symmetric or antisymmetric. Although we focus mostly on physics and chemistry applications in what follows, the derived kernels can be used in the same way in other kernel-based supervised or unsupervised learning algorithms such as kernel principal component analysis (kernel PCA) [17], kernel canonical correlation analysis (kernel CCA) [18], or support vector machines (SVMs) [19]. The main contributions are:
• We derive symmetric and antisymmetric kernels based on conventional kernels such as polynomial and Gaussian kernels and show that certain kernels can be expressed as Slater permanents or determinants.
• We analyze the feature spaces and approximation properties of such kernels.
• We demonstrate that these techniques improve the efficiency of kernel-based methods for problems exhibiting symmetries or antisymmetries.
• We apply kernel-based methods for solving the time-independent Schrödinger equation to simple quantum mechanics problems. Furthermore, we predict the boiling points of molecules using kernel ridge regression.

In Section 2, we first introduce kernels, reproducing kernel Hilbert spaces, and kernel-based methods for solving the time-independent Schrödinger equation. Antisymmetric kernels will be derived in Section 3 and symmetric kernels in Section 4. These two sections contain the main theoretical results, in particular the analysis of the properties of the resulting polynomial and Gaussian kernels. Numerical results will be presented in Section 5. We conclude the paper with a list of open problems and future research.

Kernels and kernel-based methods
We will briefly recapitulate the properties of kernels and introduce the induced reproducing kernel Hilbert spaces. Additionally, we will present a kernel-based method for solving the time-independent Schrödinger equation.

Reproducing kernel Hilbert spaces
A kernel can be regarded as a similarity measure. We will focus on real-valued kernels, but the definitions can be easily extended to complex domains.

Definition 2.1 (Kernel [19]). Given a non-empty set X, a function k : X × X → R is called a kernel if there exists a Hilbert space H and a feature map φ : X → H such that
k(x, x′) = ⟨φ(x), φ(x′)⟩_H  for all x, x′ ∈ X.

For a given kernel k, the so-called Gram matrix G ∈ R^{m×m} associated with a data set {x^(i)}_{i=1}^m is defined by [G]_ij = k(x^(i), x^(j)). Strictly positive definite means that c⊤ G c = 0 for mutually distinct data points only if c = 0. It can be shown that a function k : X × X → R is a kernel if and only if it is symmetric, i.e., k(x, x′) = k(x′, x), and positive definite (s.p.d. in what follows to avoid confusion between different notions of symmetry), see [19]. Such kernels induce so-called reproducing kernel Hilbert spaces.

Definition 2.2 (Reproducing kernel Hilbert space [20,19]). Let X be a non-empty set. A space H of functions f : X → R is called reproducing kernel Hilbert space (RKHS) with inner product ⟨·, ·⟩_H if a kernel k exists such that
(i) f(x) = ⟨f, k(x, ·)⟩_H for all f ∈ H, and
(ii) H = span{k(x, ·) | x ∈ X}, where the closure is taken with respect to the induced norm.

The first requirement is called the reproducing property. For f = k(x′, ·), this results in k(x, x′) = ⟨k(x, ·), k(x′, ·)⟩_H so that we can define the so-called canonical feature map by φ(x) = k(x, ·). For more details on kernels and reproducing kernel Hilbert spaces, we refer to [20,19]. It was shown in [21,22] that not only function evaluations but also derivative evaluations can be represented as inner products in the RKHS H, provided the kernel is sufficiently smooth. Let now α = (α_1, . . . , α_d) ∈ N_0^d be a multi-index. We define |α| = Σ_{i=1}^d α_i as usual and, for a fixed r ∈ N_0, the index set I_r = {α ∈ N_0^d : |α| ≤ r}. Given a function f : X → R, the partial derivative of f with respect to α is defined by
D^α f = ∂^{|α|} f / (∂x_1^{α_1} · · · ∂x_d^{α_d}).

Theorem 2.4 ([21,22]). Let r ∈ N_0, k ∈ C^{2r}(X × X) a kernel, and H the induced RKHS. Then:
(i) D^α k(x, ·) ∈ H for any x ∈ X and α ∈ I_r, and
(ii) (D^α f)(x) = ⟨D^α k(x, ·), f⟩_H for any f ∈ H, x ∈ X, and α ∈ I_r.
In (i) and (ii), the derivative D α is understood as acting on the first argument of the kernel k.
We will need this property later for the approximation of differential operators. Another question is how rich the Hilbert spaces H induced by a kernel k are.

Definition 2.5 (Universal kernel [23]). Let X be compact and C(X) the space of all continuous functions mapping from X to R equipped with ‖·‖_∞. A kernel k is called universal if the induced RKHS H is dense in C(X).

That is, for a function f ∈ C(X), we can find a function g ∈ H such that ‖g − f‖_∞ < ε for any ε > 0. The Gaussian kernel, for instance, is universal, while the polynomial kernel is not. We will analyze the properties of these kernels and their symmetrized and antisymmetrized counterparts in more detail below. Various other notions of universality and the relationships between universal and characteristic kernels are discussed in [24]. In what follows, we will omit the subscript H if it is clear which inner product or norm we are referring to.

Kernel-based solution of the Schrödinger equation
In [25], we proposed a kernel-based method for the solution of the time-independent Schrödinger equation and the approximation of other differential operators such as the generator of the Koopman operator. We will restrict ourselves to the Schrödinger equation. Let V be a potential and H = −(ħ²/2m) Δ + V the Hamiltonian, where ħ is the reduced Planck constant and m the mass; the time-independent Schrödinger equation is then given by
H ψ = E ψ.
That is, we want to compute eigenfunctions ψ and the associated eigenvalues E, which correspond to energies of the system. We write the Laplacian in terms of the derivative operators introduced above, Δ = Σ_{l=1}^d D^{2 e_l}, where e_l is the lth unit vector, and define the operators
C_00 = ∫ φ(x) ⊗ φ(x) dµ(x)  and  C_01 = ∫ φ(x) ⊗ (H φ)(x) dµ(x).
Here, C_00 is the standard covariance operator (see [26]) and C_01 contains the action of the Schrödinger operator. Since these integrals typically cannot be computed in practice, we estimate them using µ-distributed training data {x^(i)}_{i=1}^m, resulting in the empirical operators
Ĉ_00 = (1/m) Σ_{i=1}^m φ(x^(i)) ⊗ φ(x^(i))  and  Ĉ_01 = (1/m) Σ_{i=1}^m φ(x^(i)) ⊗ (H φ)(x^(i)).
Assuming that the eigenfunctions can be represented as ψ = Φ u, i.e., they are contained in the space spanned by the functions {φ(x^(i))}_{i=1}^m, we obtain a matrix eigenvalue problem
G_10 u = E G_00 u,
where the entries of the (generalized) Gram matrices G_00, G_10 ∈ R^{m×m} are defined by
[G_00]_ij = k(x^(i), x^(j))  and  [G_10]_ij = (H k(·, x^(j)))(x^(i)).
Eigenfunctions are then of the form ψ = Σ_{i=1}^m u_i k(x^(i), ·). A detailed derivation and numerical results for simple quantum mechanics problems, namely the quantum harmonic oscillator and the hydrogen atom, can be found in [25].
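To make the construction above concrete, the following minimal sketch assembles G_00 and G_10 for the one-dimensional harmonic oscillator (ħ = m = 1, V(x) = x²/2) with a Gaussian kernel. The function names, sample counts, and the regularization are our own choices, not prescribed by [25]:

```python
import numpy as np

sigma = 0.5

def k(x, y):
    """Gaussian kernel, evaluated elementwise on arrays."""
    return np.exp(-(x - y)**2 / (2 * sigma**2))

def H_k(x, y):
    """(H k(., y))(x) for H = -1/2 d^2/dx^2 + x^2/2, acting on the first argument."""
    d2k = ((x - y)**2 / sigma**4 - 1 / sigma**2) * k(x, y)
    return -0.5 * d2k + 0.5 * x**2 * k(x, y)

# collocation points and (generalized) Gram matrices
x = np.linspace(-5, 5, 100)
G00 = k(x[:, None], x[None, :])    # [G00]_ij = k(x_i, x_j)
G10 = H_k(x[:, None], x[None, :])  # [G10]_ij = (H k(., x_j))(x_i)

# regularized matrix eigenvalue problem G10 u = E G00 u
E = np.linalg.eigvals(np.linalg.solve(G00 + 1e-10 * np.eye(len(x)), G10))
```

For the harmonic oscillator, the smallest eigenvalues should approach the exact energies 1/2, 3/2, 5/2, . . . as the number of samples grows; the regularization parameter trades accuracy against the conditioning of G_00.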

Antisymmetric kernels and their properties
In this section, we will introduce the notion of antisymmetric kernels and define antisymmetric counterparts of well-known kernels such as the polynomial kernel and the Gaussian kernel. Furthermore, we analyze the properties of the resulting reproducing kernel Hilbert spaces. Most results can then be carried over to the symmetric case, which will be studied in Section 4.

Antisymmetric kernels
Let X ⊂ R^d be the state space. Furthermore, let S_d be the symmetric group and π ∈ S_d a permutation. With a slight abuse of notation, we define π(x) = [x_{π(1)}, . . . , x_{π(d)}]⊤ to be the vector x ∈ X permuted by π. A function f : X → R is called antisymmetric if
f(π(x)) = sgn(π) f(x)  for all π ∈ S_d,
where sgn(π) denotes the sign of the permutation π, which is 1 if the number of transpositions is even and −1 if it is odd. We define the antisymmetrization operator A by
(A f)(x) = (1/d!) Σ_{π∈S_d} sgn(π) f(π(x)).

Remark 3.1. In the same way, we can consider state spaces of the form X ⊂ ⨉_{i=1}^{d_x} R^{d_y}. Functions would then be antisymmetric with respect to permutations of vectors in R^{d_y}. That is, for x = [x_1⊤, . . . , x_{d_x}⊤]⊤ with x_i ∈ R^{d_y}, the permuted vector is then π(x) = [x_{π(1)}⊤, . . . , x_{π(d_x)}⊤]⊤. For typical quantum mechanics applications, for instance, d_y = 3 (every particle has a position in a three-dimensional space) and d_x is the number of fermions (or bosons in the symmetric case). The special case d_x = 2 is considered in [27], where the spectral properties of symmetric and antisymmetric pairwise kernels are analyzed. Supervised learning problems with such pairwise kernels are discussed in [28].
Our goal is to define antisymmetric kernels for arbitrary d, which can then be used in kernel-based learning algorithms.
Definition 3.2 (Antisymmetric kernel function). Let k : X × X → R be a kernel. We define an antisymmetric function k_a : X × X → R by
k_a(x, x′) = (1/d!) Σ_{π,π′∈S_d} sgn(π) sgn(π′) k(π(x), π′(x′)).    (1)

Clearly, if k(x, x′) = k(x′, x), then also k_a(x, x′) = k_a(x′, x). Furthermore, for a fixed permutation π̃ ∈ S_d, it holds that
k_a(π̃(x), x′) = sgn(π̃) k_a(x, x′).
Here, we used the fact that sgn(π̃) = sgn(π̃⁻¹) and sgn(π ∘ π̃) = sgn(π) sgn(π̃). Additionally, we utilized the property that for a function g : S_d → R it holds that Σ_{π∈S_d} g(π) = Σ_{π∈S_d} g(π ∘ π̃), which corresponds to a reordering of the summands. Thus, k_a is antisymmetric in both arguments. From (1) it directly follows that k_a(x, x′) = 0 if at least two entries of x or x′ are equal.¹

Lemma 3.3. The function k_a defines an s.p.d. kernel.
¹Assume w.l.o.g. that x_i = x_j for some indices i ≠ j. Let π̃ be the permutation which only swaps the positions i and j, so that π̃(x) = x. Then it holds that k_a(x, x′) = k_a(π̃(x), x′) = sgn(π̃) k_a(x, x′) = −k_a(x, x′), and thus k_a(x, x′) = 0.

[Figure 1: Antisymmetrized two- and three-dimensional Gaussian kernels. The separating isosurface in the middle is defined by k_a(x, x′) = 0.]
Proof. Symmetry was shown above. To see that the function is positive definite, let c = [c_1, . . . , c_m]⊤ ∈ R^m be a coefficient vector and {x^(i)}_{i=1}^m a set of data points. Then
Σ_{i,j=1}^m c_i c_j k_a(x^(i), x^(j)) = (1/d!) Σ_{i,j=1}^m c_i c_j Σ_{π,π′∈S_d} sgn(π) sgn(π′) ⟨φ(π(x^(i))), φ(π′(x^(j)))⟩ = d! ‖Σ_{i=1}^m c_i (A φ)(x^(i))‖² ≥ 0,
where (A φ)(x) = (1/d!) Σ_{π∈S_d} sgn(π) φ(π(x)). That is, k_a is a kernel. The antisymmetrized two- and three-dimensional Gaussian kernels are visualized in Figure 1. The feature space mapping of the antisymmetric kernel k_a is the antisymmetrization operator A applied to the feature space mapping of the kernel k.
The feature space of the antisymmetrized kernel k_a is spanned by the two antisymmetric functions {x_1 − x_2, x_1² − x_2²}. This illustrates that the feature space is significantly reduced.
Polynomial kernels of arbitrary degree p for d-dimensional spaces will be discussed in more detail in Section 3.2. The so-called Mercer features of a kernel are the eigenfunctions ϕ of the associated integral operator (T_k f)(x′) = ∫ k(x, x′) f(x) dµ(x), multiplied by the square root of the associated eigenvalues λ, see [19]. The Mercer features of an antisymmetric kernel k_a are automatically antisymmetric. This can be seen as follows: Let ϕ be an eigenfunction of T_{k_a} with corresponding eigenvalue λ ≠ 0, then
ϕ(π(x′)) = (1/λ) ∫ k_a(x, π(x′)) ϕ(x) dµ(x) = sgn(π) (1/λ) ∫ k_a(x, x′) ϕ(x) dµ(x) = sgn(π) ϕ(x′).
Mercer features of the Gaussian kernel and its antisymmetric and symmetric (see Section 4) counterparts, computed by a spectral decomposition of the covariance operator, cf. [29], are shown in Figure 2.
The Gaussian kernel and the polynomial kernel are permutation-invariant since the standard inner product and the induced norm are permutation-invariant, i.e., ⟨x, x′⟩ = ⟨π(x), π(x′)⟩ for any permutation π ∈ S_d. The antisymmetric kernel k_a is permutation-invariant by construction. While many kernels used in practice are naturally permutation-invariant, an open question is whether this assumption limits the expressivity of the induced function space. We will analyze the properties of the Gaussian kernel in Section 3.3. The permutation-invariance allows us to simplify the representation of the antisymmetric kernel.
Lemma 3.7. Given a permutation-invariant kernel k, it holds that
k_a(x, x′) = Σ_{π∈S_d} sgn(π) k(π(x), x′) = Σ_{π∈S_d} sgn(π) k(x, π(x′)).

Proof. We obtain
k_a(x, x′) = (1/d!) Σ_{π,π′∈S_d} sgn(π) sgn(π′) k(π(x), π′(x′))
           = (1/d!) Σ_{π,π′∈S_d} sgn(π) sgn(π′) k((π′⁻¹ ∘ π)(x), x′)
           = (1/d!) Σ_{π,π′∈S_d} sgn(π′⁻¹ ∘ π) k((π′⁻¹ ∘ π)(x), x′)
           = Σ_{π̃∈S_d} sgn(π̃) k(π̃(x), x′),
since all permutations π̃ = π′⁻¹ ∘ π occur d! times. In the third line, we used the same properties of permutations as above. The proof for the second representation is analogous.
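Definition 3.2 and Lemma 3.7 can be verified numerically by brute force. The following sketch (our own illustration, feasible only for small d) implements both the double sum over S_d × S_d and the simplified single sum for the permutation-invariant Gaussian kernel:

```python
import itertools
import math
import numpy as np

def sgn(perm):
    """Sign of a permutation given as a tuple of indices (counted via inversions)."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def gauss(x, y, sigma=0.5):
    return math.exp(-float(np.sum((x - y)**2)) / (2 * sigma**2))

def k_a(x, y, k=gauss):
    """Antisymmetrized kernel, Definition 3.2 (double sum over S_d x S_d)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = len(x)
    perms = list(itertools.permutations(range(d)))
    return sum(sgn(p) * sgn(q) * k(x[list(p)], y[list(q)])
               for p in perms for q in perms) / math.factorial(d)

def k_a_inv(x, y, k=gauss):
    """Simplified single sum, valid for permutation-invariant k (Lemma 3.7)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return sum(sgn(p) * k(x[list(p)], y)
               for p in itertools.permutations(range(len(x))))
```

Both evaluations agree for the Gaussian kernel; swapping two entries of the first argument flips the sign, and the kernel vanishes whenever two entries coincide.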
For the sake of simplicity, assume now that the kernel k is permutation-invariant. We want to show that for a universal kernel k, the reproducing kernel Hilbert space induced by the corresponding antisymmetric kernel k a is dense in the space of antisymmetric functions.
Proposition 3.8. Let X be bounded. Given a universal, permutation-invariant, continuous kernel k, the space H a induced by k a is dense in the space of continuous antisymmetric functions given by C a (X) = {f ∈ C(X) | f is antisymmetric}.
Proof. Let f ∈ C_a(X). It follows that f(x) = sgn(π) f(π(x)) for all π ∈ S_d and thus f = A f. Since k is assumed to be universal, we can find coefficients α_i ∈ R and vectors x^(i) ∈ X, i = 1, . . . , n, such that
‖f − Σ_{i=1}^n α_i k(x^(i), ·)‖_∞ < ε.
Applying the antisymmetrization operator to the second argument and using f = A f as well as ‖A g‖_∞ ≤ ‖g‖_∞, we obtain
‖f − (1/d!) Σ_{i=1}^n α_i k_a(x^(i), ·)‖_∞ = ‖A f − A Σ_{i=1}^n α_i k(x^(i), ·)‖_∞ ≤ ‖f − Σ_{i=1}^n α_i k(x^(i), ·)‖_∞ < ε,
where we used that (A k(x^(i), ·))(x) = (1/d!) Σ_{π∈S_d} sgn(π) k(x^(i), π(x)) = (1/d!) k_a(x^(i), x) by Lemma 3.7.

Continuous antisymmetric functions can thus be approximated arbitrarily well by universal antisymmetric kernels such as the Gaussian kernel. Although we used the same number of data points for the approximation in the proof (i.e., n points for the expansion in terms of k and also k_a), fewer data points are required in practice if we employ the antisymmetric kernel, see Example 3.14.

Antisymmetric polynomial kernels
We have seen in Example 3.4 that the feature space dimension of the polynomial kernel of order two for X ⊂ R² is reduced from six to two by the antisymmetrization. Let q = (q_1, . . . , q_d) ∈ N_0^d be a multi-index with x^q = x_1^{q_1} · · · x_d^{q_d}. For a d-dimensional state space X, the polynomial kernel of order p is then given by
k(x, x′) = (x⊤x′ + c)^p = Σ_{0 ≤ |q| ≤ p} a_q x^q (x′)^q,
where a_q = binom(p, q) c^{q_0}/q_0! and q_0 = p − |q|, cf. [30]. The multinomial coefficients are defined by binom(p, q) = p!/(q_1! · · · q_d!). Thus, the feature space is spanned by the monomials x^q with 0 ≤ |q| ≤ p and the dimension of the feature space is n_φ = binom(p + d, d) = (p + d)!/(p! d!), see, e.g., [31]. We now want to find the feature space of the corresponding antisymmetric kernel k_a. Given a multi-index q, assume that there exist two entries q_i and q_j with q_i = q_j, i ≠ j. Since the transposition (i, j) leaves the multi-index (and thus x^q) unchanged, this monomial will be eliminated by the antisymmetrization operator. It follows that the remaining monomials must have multi-indices with pairwise distinct entries. In fact, the nonzero images of monomials under antisymmetrization are of the form
(A x^{δ+µ})(x) = (1/d!) det [x_i^{δ_j + µ_j}]_{i,j=1}^d,    (2)
where δ = (d − 1, d − 2, . . . , 0) and µ = (µ_1, . . . , µ_d) with µ_1 ≥ µ_2 ≥ · · · ≥ µ_d ≥ 0 is a partition of a positive integer, see [32]. The degrees of the terms of this antisymmetric polynomial are |µ| + d(d−1)/2. Since we need all monomials of order 0 ≤ |q| ≤ p, we have to consider the partitions µ of the integers 0 ≤ p_r ≤ p − d(d−1)/2. This representation uses the fact that multi-indices corresponding to antisymmetric polynomials can be written as q = δ + µ, where δ is defined as above and µ is a partition. It follows that an antisymmetric polynomial must be at least of order d(d−1)/2. Equation (2) can be regarded as a Slater determinant (introduced below) for a specific set of functions. We also obtain the Vandermonde determinant (up to the sign) as a special case where µ = 0.

Definition 3.9 (Partition function). Let s_ℓ(n) be the function that counts the partitions of n into exactly ℓ parts.

Table 1: Dimensions of the feature spaces spanned by the polynomial kernel k and its antisymmetric counterpart k_a.
Here, d is the dimension of the state space and p the degree of the polynomial kernel.

         p = 2    p = 3    p = 4    p = 5    p = 6    p = 7    p = 8
  d    n_φ n_φa  n_φ n_φa  n_φ n_φa  n_φ n_φa  n_φ n_φa  n_φ n_φa  n_φ n_φa
  2      6   2    10   4    15   6    21   9    28  12    36  16    45  20
  3     10   0    20   1    35   2    56   4    84   7   120  11   165  16
  4     15   0    35   0    70   0   126   0   210   1   330   2   495   4

A closed-form expression for s_ℓ(n) is not known, but it can be expressed in terms of generating functions or computed using the recurrence relation
s_ℓ(n) = s_ℓ(n − ℓ) + s_{ℓ−1}(n − 1),
where we define s_ℓ(n) = 1 if n = ℓ = 0 and s_ℓ(n) = 0 if n ≤ 0 or ℓ ≤ 0 (but not n = ℓ = 0), see [33] for more details about partitions and partition functions.

Proposition 3.10. The dimension of the feature space induced by the antisymmetric polynomial kernel of degree p is
n_φa = Σ_{n=0}^{p − d(d−1)/2} Σ_{ℓ=0}^{d} s_ℓ(n).
Proof. Since d(d−1)/2 of the |q| exponents are already spoken for, we can use only the remaining |q| − d(d−1)/2 to generate partitions µ, with 0 ≤ |q| ≤ p. All these numbers can be decomposed into at most d parts since we have only d variables. If the number of parts is smaller than d, we simply add zeros.

The sizes of the feature spaces of the polynomial kernels k and k_a for different dimensions d and degrees p are summarized in Table 1. This shows that antisymmetric polynomial kernels might not be feasible for higher-dimensional problems. For d = 10, for example, the lowest degree of the monomials is already 45.
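The recurrence for s_ℓ(n) and the resulting feature space dimensions can be implemented directly; the sketch below (our own code, with hypothetical helper names) reproduces the entries of Table 1:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def s(l, n):
    """Number of partitions of n into exactly l parts (recurrence from Definition 3.9)."""
    if n == 0 and l == 0:
        return 1
    if n <= 0 or l <= 0:
        return 0
    return s(l, n - l) + s(l - 1, n - 1)

def n_phi(d, p):
    """Feature space dimension of the polynomial kernel of degree p."""
    return comb(p + d, d)

def n_phi_a(d, p):
    """Feature space dimension of its antisymmetric counterpart:
    partitions of 0 <= n <= p - d(d-1)/2 into at most d parts."""
    return sum(s(l, n) for n in range(p - comb(d, 2) + 1) for l in range(d + 1))
```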

Antisymmetric Gaussian kernels
We will now analyze the properties of the Gaussian kernel. We have shown in Proposition 3.8 that the space spanned by the antisymmetric Gaussian kernel is dense in the space of continuous antisymmetric functions. For the Gaussian kernel, the expression obtained in Lemma 3.7 can be simplified even further.
Lemma 3.12. Let k be the Gaussian kernel with bandwidth σ, then
k_a(x, x′) = det(A), where A ∈ R^{d×d} with a_ij = exp(−(x_i − x′_j)²/(2σ²)).

Proof. Applying Leibniz' formula det(A) = Σ_{π∈S_d} sgn(π) Π_{i=1}^d a_{π(i),i}, we obtain
det(A) = Σ_{π∈S_d} sgn(π) Π_{i=1}^d exp(−(x_{π(i)} − x′_i)²/(2σ²)) = Σ_{π∈S_d} sgn(π) k(π(x), x′).
Lemma 3.7 then yields the desired result.
This decomposition is akin to the well-known Slater determinant (see, e.g., [34]), which defines an antisymmetric wave function composed of single-particle wave functions ψ_1, . . . , ψ_d by
ψ_a(x_1, . . . , x_d) = (1/√(d!)) det [ψ_j(x_i)]_{i,j=1}^d.
Notice that here the normalization factor is chosen in such a way that, provided the wave functions ψ i , i = 1, . . . , d, are normalized and orthogonal to each other, ψ a is normalized as well.
Remark 3.13. We can define a more general class of antisymmetric kernels. Let f : R → R be a function; then
k_a(x, x′) = det [f(‖x_i − x′_j‖)]_{i,j=1}^d
defines an antisymmetric function, and an antisymmetric kernel provided the underlying product k(x, x′) = Π_{i=1}^d f(‖x_i − x′_i‖) is a kernel. We call such a function k_a a Slater kernel. The Gaussian kernel can be obtained by setting f(r) = e^{−r²/(2σ²)} and the Laplacian kernel, using the 1-norm, by setting f(r) = e^{−r/σ}. Alternatively, kernels could be constructed based on generalized Slater determinants or by concatenating creation and annihilation operators, see also [11,12,13].
The advantage of the Slater determinant formulation is that we can compute it efficiently using matrix decomposition techniques, without having to iterate over all permutations, which would be clearly infeasible for higher-dimensional problems.
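A minimal sketch of this evaluation (our own code): the determinant form of Lemma 3.12 requires O(d³) operations once the d × d matrix is assembled, whereas the explicit alternating sum requires O(d! · d):

```python
import itertools
import math
import numpy as np

def k_antisym_det(x, y, sigma=1.0):
    """Antisymmetric Gaussian kernel via the Slater-determinant form (Lemma 3.12)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.exp(-(x[:, None] - y[None, :])**2 / (2 * sigma**2))
    return np.linalg.det(A)

def k_antisym_brute(x, y, sigma=1.0):
    """Same kernel as an explicit alternating sum over all permutations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = len(x)
    total = 0.0
    for perm in itertools.permutations(range(d)):
        s = 1
        for i in range(d):
            for j in range(i + 1, d):
                if perm[i] > perm[j]:
                    s = -s
        total += s * math.exp(-float(np.sum((x[list(perm)] - y)**2)) / (2 * sigma**2))
    return total
```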
Example 3.14. In order to illustrate the difference between a standard Gaussian kernel k and its antisymmetrized counterpart k_a, we define an antisymmetric function f : R² → R by f(x) = sin(π(x_1 − x_2)) and apply kernel ridge regression (see, e.g., [31]) to randomly sampled data points. That is, we generate m data points x^(i) in X = [−1, 1] × [−1, 1] and compute y^(i) = f(x^(i)). We then try to recover f from the training data {(x^(i), y^(i))}_{i=1}^m. Additionally, we define an augmented data set of size 2m by adding the antisymmetrized data points, i.e., {(π(x^(i)), −y^(i))}_{i=1}^m, where π = (1, 2) in cycle notation. The bandwidth of the kernel is set to σ = 1/2. The results are shown in Figure 3. We measure the root-mean-square error (RMSE), averaged over 5000 runs, in the midpoints of a regular 30 × 30 box discretization of the domain. Kernel ridge regression using k_a results in more accurate function approximations and is, for small m, numerically equivalent to kernel ridge regression using k applied to the augmented data set of size 2m. For larger values of m, doubling the size of the data set leads to ill-conditioned matrices and increased numerical errors.

The example shows that the antisymmetrized kernel is indeed advantageous: it enables a more accurate representation without increasing the size of the data set. For higher-dimensional problems, this effect will be even more pronounced. To obtain the same accuracy for a three-dimensional antisymmetric function, we would already need 3! m = 6m data points. The kernel evaluations, on the other hand, become more expensive, but are easily parallelizable. The bottleneck of kernel-based methods is often the size of the training data set, which enters in a cubic way (since a generally dense system of linear equations has to be solved, or, if we are interested in eigenfunctions of operators associated with dynamical systems, a generalized eigenvalue problem).
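The experiment can be sketched as follows for d = 2 (our own simplified version; sample size, random seed, and regularization parameter are arbitrary choices). Note that any predictor of the form f(x) = Σ_i c_i k_a(x, x^(i)) is antisymmetric by construction:

```python
import numpy as np

def k_a(X1, X2, sigma=0.5):
    """Antisymmetric Gaussian Gram matrix via 2x2 determinants (d = 2)."""
    def g(a, b):  # one-dimensional Gaussian factor
        return np.exp(-(a[:, None] - b[None, :])**2 / (2 * sigma**2))
    return (g(X1[:, 0], X2[:, 0]) * g(X1[:, 1], X2[:, 1])
            - g(X1[:, 1], X2[:, 0]) * g(X1[:, 0], X2[:, 1]))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
y = np.sin(np.pi * (X[:, 0] - X[:, 1]))

# kernel ridge regression: solve (G + lam I) c = y, predict f(x) = k_a(x, X) c
lam = 1e-6
c = np.linalg.solve(k_a(X, X) + lam * np.eye(len(X)), y)

def f(x):
    return k_a(np.atleast_2d(np.asarray(x, float)), X) @ c
```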

Derivatives of antisymmetric kernels
For the approximation of differential operators, we will also need partial derivatives of the kernel k a . Since k a just comprises alternating sums of kernel functions k, we can compute derivatives of k a by summing over derivatives of k. For polynomial and Gaussian kernels, the derivatives of k can be found in [25]. Alternatively, the partial derivatives of the antisymmetric Gaussian kernel can be computed via Slater determinants.
Example 3.15. For the antisymmetric Gaussian kernel, let K_{e_l} ∈ R^{d×d} be the matrix obtained from the matrix in Lemma 3.12 by differentiating its lth row with respect to x_l, i.e.,
(K_{e_l})_{ij} = −((x_i − x′_j)/σ²) exp(−(x_i − x′_j)²/(2σ²)) if i = l, and (K_{e_l})_{ij} = exp(−(x_i − x′_j)²/(2σ²)) otherwise.
Then
D^{e_l} k_a(x, x′) = det(K_{e_l}),
since only the lth row depends on x_l and the determinant is multilinear in its rows. Similar formulas can be derived for the second-order derivatives.
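The row-differentiation rule can be checked against finite differences (a sketch; `d_k_antisym` is our own helper name):

```python
import numpy as np

def slater_matrix(x, y, sigma=1.0):
    """Matrix of the determinant representation of the antisymmetric Gaussian kernel."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.exp(-(x[:, None] - y[None, :])**2 / (2 * sigma**2))

def d_k_antisym(x, y, l, sigma=1.0):
    """Partial derivative of k_a(x, y) with respect to x_l: only row l of the
    Slater matrix depends on x_l, so differentiate that row and take the determinant."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    K = slater_matrix(x, y, sigma)
    K_l = K.copy()
    K_l[l, :] = -((x[l] - y) / sigma**2) * K[l, :]
    return np.linalg.det(K_l)
```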

Symmetric kernels and their properties
Although we focused on antisymmetric functions so far, symmetric functions also play an important role in quantum physics. Other typical applications include point clouds, sets, and graphs, where the numbering of points, elements, or vertices should not impair the learning algorithms. Some of the above results can be easily carried over to the symmetric case. The special case d_x = 2 is analyzed in [27]. Similar symmetrized kernels are also constructed in [6]. We focus on the analysis of the induced function spaces.

Symmetric kernels
We call a function f : X → R symmetric if f(π(x)) = f(x) for all permutations π ∈ S_d and define the symmetrization operator S by
(S f)(x) = (1/d!) Σ_{π∈S_d} f(π(x)).

Definition 4.1 (Symmetric kernel function). Let k : X × X → R be a kernel. We then define a symmetric function k_s : X × X → R by
k_s(x, x′) = (1/d!) Σ_{π,π′∈S_d} k(π(x), π′(x′)).

We simply omitted the signs of the permutations here. As before, if k(x, x′) = k(x′, x), then also k_s(x, x′) = k_s(x′, x). The function k_s is permutation-symmetric in both arguments. Note that the definition of permutation-symmetry is different from permutation-invariance, which was defined by k(x, x′) = k(π(x), π(x′)). Permutation-symmetric kernels are, however, automatically permutation-invariant. We briefly restate the above results for symmetric functions; the proofs are analogous to their counterparts for antisymmetric functions.
More general results for polynomial kernels will be derived in Section 4.2. Eigenfunctions of the integral operator associated with k s are symmetric. Mercer features of the symmetrized Gaussian kernel for d = 2 are shown in Figure 2.
Analogously, continuous symmetric functions can be approximated arbitrarily well by symmetric universal kernels.

Proposition 4.5. Let X be bounded. Given a universal, permutation-invariant, continuous kernel k, the space H_s induced by k_s is dense in the space of continuous symmetric functions given by C_s(X) = {f ∈ C(X) | f is symmetric}.

Symmetric polynomial kernels
Let us compute the dimensions of the feature spaces spanned by symmetrized polynomial kernels.

Proposition 4.6. The dimension of the feature space induced by the symmetric polynomial kernel of degree p is
n_φs = Σ_{n=0}^{p} Σ_{ℓ=0}^{d} s_ℓ(n).

Proof. Let π be a permutation, then the multi-indices q and π(q) generate the same feature space function when we apply the symmetrization operator S to the corresponding monomials x^q and x^{π(q)}. We thus have to consider only partitions µ of the integers 0 ≤ |q| ≤ p since the ordering of the multi-indices does not matter.

Table 2: Dimensions of the feature spaces spanned by the polynomial kernel k and its symmetric counterpart k_s.

         p = 2    p = 3    p = 4    p = 5    p = 6    p = 7    p = 8
  d    n_φ n_φs  n_φ n_φs  n_φ n_φs  n_φ n_φs  n_φ n_φs  n_φ n_φs  n_φ n_φs
  2      6   4    10   6    15   9    21  12    28  16    36  20    45  25
  3     10   4    20   7    35  11    56  16    84  23   120  31   165  41
  4     15   4    35   7    70  12   126  18   210  27   330  38   495  53
This case is similar to the antisymmetric case, with the difference that we require partitions of integers up to p instead of p − d(d−1)/2. Table 2 lists the dimensions of the feature spaces spanned by the polynomial kernel k and its symmetric version k_s for different combinations of d and p. Compared to the standard polynomial kernel, the number of features is significantly lower, but higher than the number of features generated by the antisymmetric polynomial kernel.

Symmetric Gaussian kernels
The symmetric kernel cannot be expressed as a Slater determinant anymore, but we can utilize a related concept. The permanent of a matrix A ∈ R^{d×d} is defined by
perm(A) = Σ_{π∈S_d} Π_{i=1}^d a_{π(i),i}.
While for d = 2 the permanent can be written as a determinant (by flipping the sign of a_12 or a_21), this is not possible anymore for d ≥ 3 [35]. No polynomial-time algorithm for the computation of the permanent is known, but there are efficient approximation schemes for matrices with non-negative entries [36].
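Although no polynomial-time algorithm is known, the permanent can at least be computed exactly in O(2^d d) rather than O(d! · d) operations using Ryser's inclusion-exclusion formula, a standard technique not discussed in the text; a sketch:

```python
import itertools
import numpy as np

def perm_brute(A):
    """Permanent as an explicit sum over all permutations."""
    d = A.shape[0]
    return sum(np.prod([A[p[i], i] for i in range(d)])
               for p in itertools.permutations(range(d)))

def perm_ryser(A):
    """Ryser's formula: perm(A) = (-1)^d sum_{S != {}} (-1)^|S| prod_i sum_{j in S} a_ij."""
    d = A.shape[0]
    total = 0.0
    for r in range(1, d + 1):
        for S in itertools.combinations(range(d), r):
            rowsums = A[:, list(S)].sum(axis=1)
            total += (-1)**len(S) * np.prod(rowsums)
    return (-1)**d * total
```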
Lemma 4.7. Let k be the Gaussian kernel with bandwidth σ, then
k_s(x, x′) = perm(A), where A ∈ R^{d×d} with a_ij = exp(−(x_i − x′_j)²/(2σ²)).

Proof. The proof is analogous to the one for Lemma 3.12. Using the definition of the permanent, we obtain
perm(A) = Σ_{π∈S_d} Π_{i=1}^d exp(−(x_{π(i)} − x′_i)²/(2σ²)) = Σ_{π∈S_d} k(π(x), x′).
The result then follows from Lemma 4.4.
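A brute-force check of this representation (our own sketch; only feasible for small d):

```python
import itertools
import math
import numpy as np

def k_sym_perm(x, y, sigma=1.0):
    """Symmetric Gaussian kernel as a matrix permanent (Lemma 4.7), brute force."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.exp(-(x[:, None] - y[None, :])**2 / (2 * sigma**2))
    d = len(x)
    return sum(np.prod([A[p[i], i] for i in range(d)])
               for p in itertools.permutations(range(d)))

def k_sym_sum(x, y, sigma=1.0):
    """Equivalent sum over permutations of the first argument (Lemma 4.4)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return sum(math.exp(-float(np.sum((x[list(p)] - y)**2)) / (2 * sigma**2))
               for p in itertools.permutations(range(len(x))))
```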
Example 4.8. Assume we have a set of undirected graphs that we would like to classify or categorize. The results should not depend on the vertex labels and thus be identical for isomorphic graphs. Let A, A′ ∈ R^{d×d} be the adjacency matrices of the graphs G and G′, respectively. We define a Gaussian kernel for graphs by
k(G, G′) = exp(−‖A − A′‖_F² / (2σ²)),
where ‖·‖_F denotes the Frobenius norm, and make it symmetric as described above. The only difference here is that we have to define π(A) = [a_{π(i),π(j)}]_{i,j=1}^d to permute rows and columns simultaneously. The kernel function k_s(G, G′) can then be expressed in terms of so-called hyperpermanents. The derivation of a formula for the Laplace expansion of hyperpermanents can be found in Appendix A.1. For the considered example, we set σ = 1 and randomly generate a set of 100 undirected connected graphs of size d = 5. We then apply kernel PCA, see [17], using the symmetric kernel k_s. Sorting the graphs according to the first principal component, we obtain the ordering shown in Figure 4 (only a subset of the graphs is displayed). Isomorphic graphs are grouped into the same category.
Other learning algorithms such as kernel k-means, kernel ridge regression, or support vector machines can be used in the same way, enabling us to cluster, make predictions for, or classify data where the order of elements is irrelevant.

Product or quotient representations of symmetric kernels
The aim now is to express a symmetric kernel not as a Slater permanent but as a product or quotient of antisymmetric functions. As shown in Section 3.1, an antisymmetric kernel is zero for all x for which a (non-trivial) permutation π exists such that π(x) = x. Therefore, products of antisymmetric kernels are zero for such x as well, see also Figure 5. We thus mainly restrict ourselves to quotients. Let k_a^(1) and k_a^(2) be two permutation-invariant antisymmetric kernels with k_a^(2)(x, x′) ≠ 0. We define
k_s(x, x′) = k_a^(1)(x, x′) / k_a^(2)(x, x′).

Remark 4.9. If the numerator and denominator can be written as determinants, i.e., k_a^(1)(x, x′) = det(K_1) and k_a^(2)(x, x′) = det(K_2), we obtain
k_s(x, x′) = det(K_1)/det(K_2) = det(K_1 K_2⁻¹).

Example 4.10. Suppose d = 2. Let k^(1) and k^(2) be two Gaussian kernels with bandwidths σ_1 and σ_2, respectively, where σ_1 < σ_2. If the bandwidths are sufficiently small, either k^(1)(x, x′) and k^(2)(x, x′) or k^(1)(π(x), x′) and k^(2)(π(x), x′) will be close to zero (unless π(x) = x), where π = (1, 2) in cycle notation. Assume w.l.o.g. the latter holds, then
k_s(x, x′) ≈ k^(1)(x, x′) / k^(2)(x, x′),
which is a Gaussian with bandwidth σ satisfying 1/σ² = 1/σ_1² − 1/σ_2². This is illustrated in Figure 5. Furthermore, the limit of k_s(x, x′) as x_2 → x_1 exists. Product kernels, on the other hand, cannot approximate the symmetric Gaussian kernel if x is close to the separating boundary given by x_1 = x_2. The symmetric Gaussian kernel can thus be approximated by a quotient of antisymmetric Gaussian kernels, which can be evaluated in O(d³), avoiding the non-polynomial complexity of the permanent. The question whether such kernels are universal is beyond the scope of this work.
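A minimal sketch of such a quotient kernel for Gaussian bandwidths σ_1 < σ_2 (our own code). The construction is symmetric because the sign flips of numerator and denominator under permutations cancel:

```python
import numpy as np

def det_kernel(x, y, sigma):
    """Antisymmetric Gaussian kernel in determinant form (Lemma 3.12)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.linalg.det(np.exp(-(x[:, None] - y[None, :])**2 / (2 * sigma**2)))

def k_quotient(x, y, sigma1=0.2, sigma2=0.4):
    """Quotient of two antisymmetric Gaussian kernels, an O(d^3) symmetric kernel.
    Only valid where the denominator does not vanish."""
    return det_kernel(x, y, sigma1) / det_kernel(x, y, sigma2)
```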

Applications
In addition to the guiding examples presented above, we will illustrate the efficacy of the derived kernels with the aid of quantum physics and chemistry problems.

Particles in a one-dimensional box
Let us first consider a simple one-dimensional two-particle system. We define a potential V by
V(x) = 0 if 0 ≤ x_1, x_2 ≤ L, and V(x) = ∞ otherwise.
Furthermore, we assume that the two particles do not interact and obtain the Schrödinger equation
−(ħ²/2m) (∂²/∂x_1² + ∂²/∂x_2²) ψ(x_1, x_2) = E ψ(x_1, x_2)
for 0 ≤ x_1, x_2 ≤ L. By separating the two variables, we obtain the classical particle in a box problem, with eigenvalues E_ℓ = ħ²π²ℓ²/(2mL²) and eigenfunctions ψ_ℓ(x) = √(2/L) sin(ℓπx/L), for ℓ = 1, 2, 3, . . . , see, for instance, [37]. For the two-particle system, the eigenvalues are hence of the form E_{ℓ1,ℓ2} = E_{ℓ1} + E_{ℓ2} = ħ²π²(ℓ1² + ℓ2²)/(2mL²) and the eigenfunctions are products ψ_{ℓ1,ℓ2}(x_1, x_2) = ψ_{ℓ1}(x_1) ψ_{ℓ2}(x_2). However, since the two particles are physically indistinguishable, the wave functions must satisfy |ψ_{ℓ1,ℓ2}(x_1, x_2)|² = |ψ_{ℓ1,ℓ2}(x_2, x_1)|², which implies that the functions are either symmetric (if the particles are bosons) or antisymmetric (if the particles are fermions). Let us assume that the two particles are electrons, i.e., fermions. We thus want to compute antisymmetric solutions of the time-independent Schrödinger equation by applying the approach introduced in Section 2.2, see also [25]. In the same way, we could assume that the particles are bosons and compute symmetric solutions by replacing the antisymmetric kernel with a symmetric kernel. We set ħ = 1, m = 1, and L = π, choose the antisymmetric Gaussian kernel with bandwidth σ = 0.1, and generate m = 900 uniformly sampled points in [0, L] × [0, L]. Additionally, to ensure that the eigenfunctions are zero outside the box, we place 124 equidistantly distributed test points on the boundary and enforce ψ_{ℓ1,ℓ2}(x_1, x_2) = 0 for these boundary points. We thus have to solve a constrained eigenvalue problem and use the algorithm described in [38]. The first three eigenfunctions ψ_{1,2}, ψ_{1,3}, and ψ_{2,3} are shown in Figure 6 and are good approximations of the analytically computed eigenfunctions. The probability that the two electrons are in the same location is always zero.
Furthermore, the results show that by increasing the number of data points we obtain more accurate and less noisy estimates of the true eigenvalues.
Remark 5.1. We would like to point out that
• this example is just meant as an illustration of the concepts and not as a realistic physical model;
• eigenfunctions with $\ell_1 = \ell_2$ are symmetric and eliminated by the antisymmetrization operation;
• approximations of the eigenfunctions can be obtained using far fewer points ($m < 50$), but the eigenvalues will be considerably overestimated (kernels tailored to quantum mechanics applications might lead to better approximations);
• the antisymmetry assumption is encoded only in the kernel, not in the Schrödinger equation itself.
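The second point of the remark can be verified directly: with $L = \pi$, the antisymmetrized (unnormalized) product eigenfunction is $\psi_{\ell_1}(x_1)\psi_{\ell_2}(x_2) - \psi_{\ell_1}(x_2)\psi_{\ell_2}(x_1)$, which vanishes identically for $\ell_1 = \ell_2$. A small sanity check (the function name is ours):

```python
import math

def psi_anti(l1, l2, x1, x2):
    """Antisymmetrized two-particle box eigenfunction (unnormalized), L = pi."""
    return (math.sin(l1 * x1) * math.sin(l2 * x2)
            - math.sin(l1 * x2) * math.sin(l2 * x1))
```

For $\ell_1 = \ell_2$ the function is identically zero, and for $\ell_1 \neq \ell_2$ it changes sign under exchange of the two particles.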
This can easily be extended to the multi-particle case. We now add pairwise electron-electron repulsion terms, resulting in the Hamiltonian $\mathcal{H} = -\frac{\hbar^2}{2m} \sum_{i=1}^{d} \frac{\partial^2}{\partial x_i^2} + V(x) + \sum_{i < j} \frac{1}{|x_i - x_j|}$. For d = 3, we randomly generate 3000 interior points and 600 boundary points to enforce Dirichlet boundary conditions. We choose a Gaussian kernel with bandwidth σ = 0.1, assemble the Gram matrices, and again solve the resulting constrained eigenvalue problem.
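Assembling the two Gram matrices for this collocation-type eigenvalue problem requires applying the differential operator to the kernel in one argument, which can be done in closed form for the Gaussian kernel: $\Delta_x k(x, x') = \big(\|x - x'\|^2/\sigma^4 - d/\sigma^2\big)\, k(x, x')$. A minimal sketch for the kinetic part only (with $\hbar = m = 1$; helper names are ours, and the plain rather than the antisymmetric Gaussian kernel is used for brevity):

```python
import numpy as np

def gauss(x, y, sigma):
    """Gaussian kernel for points x, y in R^d."""
    return np.exp(-np.sum((x - y)**2) / (2 * sigma**2))

def ham_gauss(x, y, sigma):
    """Kernel with the kinetic operator -1/2 * Laplacian applied in the first argument."""
    d = len(x)
    r2 = np.sum((x - y)**2)
    return -0.5 * (r2 / sigma**4 - d / sigma**2) * gauss(x, y, sigma)

def gram(points, fun, sigma):
    """Assemble the Gram matrix of a (kernel) function on the sampled points."""
    m = len(points)
    return np.array([[fun(points[i], points[j], sigma) for j in range(m)]
                     for i in range(m)])
```

With both Gram matrices assembled (and boundary rows appended as constraints), the eigenvalue problem can then be handed to a constrained generalized eigensolver as in [38].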
The results are shown in Figure 7. For the sake of comparison, we also plot the corresponding eigenfunctions of the Schrödinger equation without the electron-electron interaction. The eigenfunctions of the separable case are similar to those of the system with the interaction terms included. For this particular system, the interaction terms do not seem to have a drastic effect on the system's low-lying energy states. In general, however, their effect on the electronic wave function can be significant. We also remark that energies and wave functions of the interacting system could in principle be approximated by perturbation techniques. However, due to the degeneracy of the antisymmetric states, such a perturbation analysis is beyond the scope of this work.

Acyclic molecules
As a second example, we consider a data set of acyclic molecules [39]. The aim is to predict the boiling points of these molecules, which contain the elements C, H, O, and S. The data set (available at https://brunl01.users.greyc.fr/CHEMISTRY/) consists of 183 graphs G = (V, E) representing the molecular structures and the corresponding boiling points in degrees Celsius, see Figure 8 for a few examples of molecules included in the data set. The number of vertices |V| varies between 3 and 11, where the hydrogen atoms of the molecules are neglected. Thus, in order to compare graphs of different sizes, we expand all adjacency matrices to $\mathbb{R}^{d \times d}$ with d = 11 by appending rows and columns of zeros, representing artificial isolated nodes. We define a symmetrized Laplacian kernel on graphs, cf. Example 4.8. Given the adjacency matrices $A, A' \in \mathbb{R}^{d \times d}$ of the graphs G = (V, E) and G' = (V', E') as well as the kernel parameter σ > 0, we define the tensor $T \in \mathbb{R}^{d \times d \times d \times d}$ by $t_{i,j,k,l} = \exp(-|a_{i,j} - a'_{k,l}|/\sigma)$ for i ≠ j and k ≠ l and $t_{i,i,k,k} = \exp(-1/\sigma)$ if $a_{i,i} \neq a'_{k,k}$ and $t_{i,i,k,k} = 1$ otherwise. The latter definition ensures that we avoid unwanted effects of any ordinal labeling of the nodes. Using the hyperpermanent of T, the kernel evaluation $k_s(G, G')$ can be written as $k_s(G, G') = \frac{1}{d!}\operatorname{hp}(T)$. Note that we do not consider entries $t_{i,j,k,l}$ with either i = j, k ≠ l or i ≠ j, k = l since $i = j \Leftrightarrow \pi(i) = \pi(j)$. We refer to Appendix A for different methods and simplifications for computing the hyperpermanent of T. For kernel-based (ridge) regression (see, e.g., [40]), we extract 165 adjacency matrices (≈ 90%) and their corresponding boiling point temperatures from the data set as training samples; the other data pairs constitute the test set. That is, for any G in the test set, the regression function is given by $f(G) = \Theta^\top K_{\mathrm{train},G}$, where the vector $K_{\mathrm{train},G} \in \mathbb{R}^{165}$ contains the kernel evaluations between the training samples and the test sample G.
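For small graphs, a permutation-symmetrized graph kernel of this type can be evaluated naively by summing over all node relabelings, which makes the permutation invariance explicit. The following brute-force sketch reflects our reading of the definitions above; the $1/d!$ normalization is an assumption, the special diagonal (node-label) handling is omitted for brevity, and the $O(d!)$ cost restricts it to very small d:

```python
import itertools
import math

import numpy as np

def k_graph(A, B, sigma):
    """Naive O(d!) evaluation of a permutation-symmetrized Laplacian graph kernel.

    A, B are (zero-padded) adjacency matrices of equal size d x d.
    """
    d = A.shape[0]
    total = 0.0
    for perm in itertools.permutations(range(d)):
        # L1 distance between A and the relabeled B, summed over all entries
        s = sum(abs(A[i, j] - B[perm[i], perm[j]])
                for i in range(d) for j in range(d))
        total += math.exp(-s / sigma)
    return total / math.factorial(d)
```

By construction, relabeling the nodes of either graph leaves the kernel value unchanged, which is exactly the property needed to compare molecules independently of any ordinal node labeling.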
The vector $\Theta \in \mathbb{R}^{165}$ is the solution of $K_{\mathrm{train},\mathrm{train}} \Theta = b$, with b being the vector of boiling points of the molecules in the training set. We then compute the average error as well as the root-mean-square error in the boiling points of the test set in order to evaluate the generalizability of the learned regression function. We repeat each experiment 10000 times with randomly chosen training and test sets; the results for different kernel parameters σ are shown in Figure 9. The best results in terms of the average and the root-mean-square error are obtained for kernel parameters σ between 2.5 and 2.8, see Table 3 for details. According to the data set's website, the best results so far for boiling point prediction with 90% of the set as training data and 10% as test data are achieved by so-called treelet kernels, which exploit all possible graph/tree patterns up to a given size [39]. In this case, the average error is listed as 4.87 and the root-mean-square error as 6.75. Both values are comparable with our results.
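The regression step itself is standard kernel (ridge) regression. A compact sketch, with a small regularization term added for numerical stability (our addition; the text above solves the unregularized system, and the function names are ours):

```python
import numpy as np

def krr_fit(K_train, b, reg=1e-10):
    """Solve (K + reg * I) theta = b for the coefficient vector theta."""
    n = len(b)
    return np.linalg.solve(K_train + reg * np.eye(n), b)

def krr_predict(theta, k_test):
    """Predict f(G) = theta^T k_test, where k_test contains the kernel
    evaluations between the training samples and the test sample G."""
    return theta @ k_test
```

With a negligible regularization term, predictions at the training samples interpolate the training targets, which is a quick way to check that the Gram matrix was assembled consistently.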
As shown in Figure 9 (b), the entries of the Gram matrix tend to decrease for larger molecules. This effect can be explained by the expansion of the adjacency matrices and similarities of the molecules with small numbers of atoms. For instance, the first two compounds in the (ordered) data set are dimethyl ether (C₂H₆O) and dimethyl sulfide (C₂H₆S). Due to the expansion from |V| = 3 to |V| = 11, the majority of the permutations in (4) do not affect the adjacency matrix of G', cf. Appendix A.3. The block structure of the matrix arises from the ordering of the data set, i.e., each group of molecules with the same number of atoms is divided into subgroups of compounds containing one oxygen atom, two oxygen atoms, one sulfur atom, and two sulfur atoms.

Conclusion
We derived symmetric and antisymmetric kernels that can be used in kernel-based learning algorithms such as kernel PCA, kernel CCA, or support vector machines, but also to approximate symmetric or antisymmetric eigenfunctions of transfer operators or differential operators (e.g., the Koopman generator or Schrödinger operator). Potential applications range from point cloud analysis and graph classification to quantum physics and chemistry. Furthermore, we analyzed the induced reproducing kernel Hilbert spaces and resulting feature space dimensions. The effectiveness of the proposed kernels was demonstrated using guiding examples and simple benchmark problems. The next step is now to apply kernel-based methods to more complex quantum systems. Such problems might require kernels tailored to the system at hand. By exploiting additional properties (sparsity, low-rank structure, weak coupling between subsystems), it could be possible to improve the performance of kernel-based methods. Furthermore, the kernel flow approach proposed in [41] could be extended to operator estimation problems. This would allow us to also learn the kernel from data.
Another topic for future research would be to consider other types of symmetries and to develop kernels that explicitly take these properties into account. While the antisymmetric kernel can be evaluated efficiently using matrix factorizations, this is not possible for the symmetric kernel, which requires the evaluation of a matrix permanent. Utilizing efficient approximation schemes could speed up the generation of the required Gram matrices significantly. Alternatively, the product or quotient formulation of symmetric kernels could be exploited to facilitate the application of the proposed methods to higher-dimensional problems.
which results in the symmetrized Laplacian kernel for graphs. For both choices, we set $t_{i,i,k,k} = \exp(-1/(2\sigma^2))$ and $t_{i,i,k,k} = \exp(-1/\sigma)$, respectively, if $a_{i,i} \neq a'_{k,k}$, and $t_{i,i,k,k} = \exp(0) = 1$ otherwise. In what follows, we will consider different techniques for computing the hyperpermanent of T.

A.1. Laplace expansion for the computation of hyperpermanents
Define $T^{(\mu)} \in \mathbb{R}^{d' \times d' \times d' \times d'}$ by $t^{(\mu)}_{i,j,k,l} = \dots$, which enables us to decrease the number of considered permutations significantly if $d'$ is much smaller than $d$.

B. Quotient representation of symmetric Gaussian kernels
Under the same assumptions as given in Example 4.10, we use Lemma 3.12 in order to write $k_s(x, x')$ as $a(x, x')\, k$