
Investigating the potential for a limited quantum speedup on protein lattice problems


Published 25 October 2021 © 2021 The Author(s). Published by IOP Publishing Ltd on behalf of the Institute of Physics and Deutsche Physikalische Gesellschaft
Citation: Carlos Outeiral et al 2021 New J. Phys. 23 103030. DOI: 10.1088/1367-2630/ac29ff


Abstract

Protein folding is a central challenge in computational biology, with important applications in molecular biology, drug discovery and catalyst design. As a hard combinatorial optimisation problem, it has been studied as a potential target problem for quantum annealing. Although several experimental implementations have been discussed in the literature, the computational scaling of these approaches has not been elucidated. In this article, we present a numerical study of quantum annealing applied to a large number of small peptide folding problems, aiming to infer useful insights for near-term applications. We present two conclusions: that even naïve quantum annealing, when applied to protein lattice folding, has the potential to outperform classical approaches, and that careful engineering of the Hamiltonians and schedules involved can deliver notable relative improvements for this problem. Overall, our results suggest that quantum algorithms may well offer improvements for problems in the protein folding and structure prediction realm.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

The structure of a protein captures crucial information about its biological function and therapeutic potential [1]. Knowledge of a protein's structure unlocks valuable biological information, ranging from the ability to predict protein–protein interactions [2], to structure-based discovery of new drugs [3] and catalysts [4]. Unfortunately, experiments to determine protein structure are challenging and require extensive time and resources [1, 5, 6]. As of May 2021, the TrEMBL database [7] contained 214 million protein sequences, while only 177 000 protein structures had been deposited in the Protein Data Bank [8]. A reliable computational algorithm for the template-free—that is, when a structurally similar protein, perhaps an evolutionarily related specimen from a different organism, is not available to use as a reference [9]—prediction of a protein's structure and its folding pathway from sequence information alone would enable annotation of millions of proteins and could stimulate major advances in biological research. However, despite steady improvements over the past six decades [10–12], and recent advances in the past year [13, 14], a consistent and accurate algorithm for protein folding from sequence has remained elusive.

Over the past decade, there have been attempts to leverage quantum computing for protein structure prediction [15]. The biological structure of a protein is thought to correspond to the minimum of a free energy hypersurface, which even for small peptides is too vast to explore exhaustively [11]. A type of quantum computation that may be appropriate to help is quantum annealing, an approach that exploits the physics of a controlled quantum system and is considered to be of potential use in optimisation problems (whether classical or quantum in nature) [16]. Typically, the set of possible solutions is mapped to a register of qubits with a binary encoding, and the objective function is represented as a physical Hamiltonian, $H_{\mathrm{problem}}$, whose eigenvectors and eigenvalues are respectively problem solutions and their scores. In particular, the ground state |Φ⟩ (or the respective ground eigenspace) corresponds to the global minimum of the problem. The algorithm starts by initialising the register of qubits in the ground state |Ψ(0)⟩ of a given Hamiltonian, $H_{\mathrm{trivial}}$, whose ground state is easy to prepare, and gently transforming it into the problem Hamiltonian, $H_{\mathrm{problem}}$. If the evolution is slow enough, the adiabatic theorem of quantum mechanics [17] ensures that the final state |Ψ(T)⟩ will be infinitesimally close to |Φ⟩.

Protein chemistry applications of quantum computing have concentrated on a simplified prototype known as the protein lattice model [18], which has been used as a coarse-grained proxy for structure prediction [19, 20] (see figure 1) as well as an invaluable testbed in early theoretical protein folding studies [39]. In this model, the protein is described by a self-avoiding walk on a lattice, whose energy is determined by the contacts between adjacent amino acids, and the minimum energy is identified with the biologically active form of the protein. Unfortunately, the problem of finding the protein configuration that minimises the energy is known to be NP-hard [21, 22]. In the context of quantum computing, several encodings (i.e. instructions to map the problem to a Hamiltonian operator and the solutions to a binary string) have been proposed [23–25], some of which have also been tested experimentally in D-Wave processors [25, 26]. Recent work has attempted extensions of the protein lattice model [27], and even off-lattice models [28, 29]. Although multiple algorithmic approaches have been suggested, there is not, to our knowledge, any numerical or analytical study establishing the computational scaling of quantum annealing for protein folding applications.

Figure 1. (A) Liquorice representation of a randomly generated short protein (peptide) with sequence AVSQVADGILS. In this depiction, every stick represents a bond between two atoms, and the colour of the corresponding half of the stick identifies the nature of the atom: green is carbon, white is hydrogen, blue is nitrogen and red is oxygen. The spheres that surround the sticks, representing van der Waals volume, have been coloured by residue identity. (B) Lattice model of the peptide in (A). The protein is represented as a self-avoiding walk on a lattice, where every node corresponds to a residue. Amino acids that are distant in the sequence but are spatial neighbours induce complex interactions (represented as dotted blue lines) that stabilise a particular fold. Above the dotted lines, we display the Miyazawa–Jernigan stabilisation energy of the contact.


Protein sequences are constrained to lengths of a few hundred, or at most a few thousand, amino acids. This idiosyncrasy renders considerations of asymptotic complexity, a typical focus of the quantum algorithms literature, inadequate for a practical evaluation of usefulness. The median length of a human protein is 375 amino acids [30], while the median clinical target is about 414 amino acids long [31]—and the effective length may be made smaller by considering independently-folding domains, a ubiquitous strategy in computational structure prediction pipelines. Hence, most interesting problems could be encoded with just a few hundred qubits. If there exists a relative, not necessarily asymptotic, speedup with respect to classical heuristics, this will be of interest to both basic research in molecular biology and the pharmaceutical and biotechnological industries, which dedicate significant resources to structural protein studies. Establishing the scaling of quantum annealers in problems of modest size is a crucial baseline for further exploration as near-term quantum annealers capable of addressing larger problems are made available [38].

In this article, we present an extensive numerical study of protein lattice folding in idealised, error-free, closed-system quantum annealing, such as might be achievable in future devices with long coherence times in the presence of error correction [32–36], or with fault-tolerant universal quantum computers employing Hamiltonian simulation [37]. In particular, we have computed the minimum spectral gaps and optimised time-to-solution (TTS) metrics for a large dataset of hard problems. The spectral gaps display a strongly vanishing behaviour, which according to the adiabatic theorem leads one to anticipate a quadratically stronger upper bound in runtime. However, our simulations of unitary evolution reveal a scaling that is several orders of magnitude milder, and that can be optimised further via simple heuristics based on average gap positions. When comparing the quantum method to a purely classical heuristic that minimises a classical energy function we find that, in the conditions examined, the quantum annealing method is more efficient than classical simulated annealing, although further research will need to be conducted to establish if this potential advantage holds under different conditions or when compared against other potential methods.

2. Methodology

Throughout this article, we use a large dataset of peptide (i.e. short protein) sequences, each with a unique global energy minimum (UGEM), which have been mapped to Ising-like problems with the methods described by Babbush et al [24]. Peptide sequences with UGEMs have been shown to display the properties of real proteins, even for small sequences (e.g. [39]). The dataset is examined using numerical simulations to assess the gap and expected TTS, studied in terms of how the annealing conditions affect the results, and compared against an off-the-shelf classical algorithm.

2.1. Benchmark dataset generation

We examined a total of 29 503 peptide sequences, with an approximately equal number of cases in each dimension (15 173 in 2D and 14 330 in 3D) and at each length (approximately 4500 cases per length at a given dimension, with the exception of 2D length 7, where it is challenging to generate UGEM cases, and we considered only 1700 cases). To produce this dataset, we generated protein sequences by random sampling with replacement of the 20 standard amino acids (ARNDCQEGHILKMFPSTWYV). The states of these instances were enumerated by a brute-force algorithm, scoring the energies using the Miyazawa–Jernigan 20-amino-acid model [68], and all the sequences with two or more non-equivalent minimum-energy conformations were rejected.

These sequences were mapped to an algebraic expression representing the couplings between individual spins in a programmable Ising model. This expression is often known as a polynomial unconstrained binary optimisation (PUBO) [69]. We employ the turn encoding approach by Babbush et al [24], which displays the highest reported efficiency in the number of qubits. To perform the mapping in practice, we developed a modified version of SymEngine [70], a computer algebra system written in C++, which exploits the idempotency of binary variables, leading to a speedup of up to five orders of magnitude. We consider protein Hamiltonians with 10 (6 aa peptide in 2D) to 16 qubits (8 aa peptide in 3D), meaning Hilbert spaces of size 1024 to 65 536.
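
As an illustration of the simplification at play (a minimal sketch in plain sympy rather than our modified SymEngine, with hypothetical variable names), the idempotency rule $q^{2} = q$ for binary variables collapses every higher power in the expanded PUBO:

```python
# Minimal sketch (not the authors' modified SymEngine): reducing a binary
# polynomial with the idempotency rule q**2 == q, using plain sympy.
import sympy as sp

def reduce_binary_poly(expr, variables):
    """Expand a polynomial over binary variables and clamp all exponents to 1."""
    poly = sp.Poly(sp.expand(expr), *variables)
    reduced = 0
    for monom, coeff in poly.terms():
        term = coeff
        for var, power in zip(variables, monom):
            if power > 0:          # q**k -> q for any k >= 1, since q in {0, 1}
                term *= var
        reduced += term
    return sp.expand(reduced)

q = sp.symbols('q0 q1 q2')
pubo = (q[0] + q[1])**2 + 3*q[2]**3 - q[0]*q[1]
print(reduce_binary_poly(pubo, q))   # -> q0*q1 + q0 + q1 + 3*q2
```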

2.2. Numerical simulations

2.2.1. Representation of the Hamiltonian

Every quantum annealing process considered in this article may be represented in the following general form:

$H(s) = (1-s)\,H_{\mathrm{start}} + s\,H_{\mathrm{protein}} + \lambda\,s(1-s)\,H_{\mathrm{catalyst}}$,    (1)

where $H_{\mathrm{protein}}$ is the Hamiltonian expressing a given protein lattice problem, $H_{\mathrm{start}}$ is the initial transverse-field Hamiltonian,

$H_{\mathrm{start}} = \frac{1}{2}\sum_{i=1}^{N}\left(I - {\sigma}_{i}^{x}\right)$,    (2)

and $H_{\mathrm{catalyst}}$ is one of the following:

$H_{\mathrm{nonstq}} = \frac{1}{N}\sum_{i=1}^{N}\sum_{j=i+1}^{N}{\sigma}_{i}^{x}{\sigma}_{j}^{x}$,    (3)

$H_{\mathrm{stq}} = -\sum_{i=1}^{N}{\sigma}_{i}^{x}$.    (4)

In these equations, N is the number of qubits; λ is the relative strength of the catalyst; s is the annealing progress, defined in the interval [0, 1], and generated by some interpolation function s = f(t/T); I is the identity operator in the N-spin space, i.e. $I = I_{1}\otimes I_{2}\otimes \cdots \otimes I_{N}$; and ${\sigma }_{i}^{x}$ is the Pauli X matrix applied to the ith spin in N-spin space, i.e. ${\sigma }_{i}^{x}=\left({\bigotimes}_{k=1}^{i-1}I\right)\otimes {\sigma }_{x}\otimes \left({\bigotimes}_{k=i+1}^{N}I\right)$. Note that the non-stoquastic catalyst in equation (3) includes a 1/N factor to ensure that it is on the same energy scale as the rest of the terms. It is worth pointing out that one could compare this catalyst to its stoquastic counterpart by making the interaction ferromagnetic.

In the computational basis, $H_{\mathrm{protein}}$ can be represented as a diagonal matrix, and both $H_{\mathrm{start}}$ and $H_{\mathrm{stq}}$ will be matrices with off-diagonal elements in defined positions. It is easy to prove that the number of non-zero elements in a linear combination of these Hamiltonian matrices grows only as $\mathcal{O}({2}^{N})$ (up to a polynomial factor), hence significant savings in memory and processing time can be made by exploiting the sparsity of the matrix representation. In contrast, however, $H_{\mathrm{nonstq}}$ has a different sparsity pattern and can only be simulated at a much higher computational cost. Our numerical simulations relied on the PETSc library [72, 73], where the Hamiltonians were represented in the compressed sparse row or Yale format.
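
The following sketch illustrates this structure with SciPy sparse matrices (a toy stand-in for the PETSc implementation; the random diagonal is a placeholder for a real protein PUBO, and $H_{\mathrm{start}}$ follows equation (2)):

```python
# Toy version of the interpolated Hamiltonian in sparse form (SciPy CSR here;
# the paper uses PETSc). The random diagonal stands in for a real protein PUBO.
import numpy as np
import scipy.sparse as sparse

def sigma_x(i, n):
    """Pauli X acting on qubit i of n, as a sparse CSR matrix."""
    sx = sparse.csr_matrix(np.array([[0.0, 1.0], [1.0, 0.0]]))
    op = sparse.identity(1, format='csr')
    for k in range(n):
        op = sparse.kron(op, sx if k == i else sparse.identity(2, format='csr'),
                         format='csr')
    return op

def h_start(n):
    """H_start = (1/2) sum_i (I - sigma_i^x); its ground state is |+>^n."""
    dim = 2 ** n
    h = sparse.csr_matrix((dim, dim))
    for i in range(n):
        h = h + 0.5 * (sparse.identity(dim, format='csr') - sigma_x(i, n))
    return h

n = 4
energies = np.random.default_rng(0).normal(size=2 ** n)  # placeholder PUBO values
H_protein = sparse.diags(energies, format='csr')         # diagonal in the z basis
H_s = lambda s: (1 - s) * h_start(n) + s * H_protein     # equation (1) with lambda = 0
print(f'{H_s(0.5).nnz} non-zeros out of {(2 ** n) ** 2} entries')
```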

2.2.2. Gap evaluation

We estimated the minimum spectral gap between the ground state and the first excited state via numerical diagonalisation of the Hamiltonian matrices, using the Krylov–Schur method [71] as implemented in the SLEPc package [71]. We computed the eigenvalues in increments of Δs = 0.1, and interpolated the results using cubic splines. The gap was found as the minimum energy difference between every curve that ran into the ground state at the end of the evolution, and any other curve.

We considered only Hamiltonians without a catalyst term (i.e. λ = 0) for the evaluation of the spectral gap.
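
A simplified version of this gap scan (a SciPy stand-in for the SLEPc Krylov–Schur workflow; the paper additionally tracks which curves run into the ground state, whereas this sketch simply takes the two lowest levels) might look as follows:

```python
# Simplified gap scan: two lowest eigenvalues on a coarse Delta-s = 0.1 grid,
# cubic-spline interpolation, then the minimum gap located on a fine grid.
import numpy as np
from scipy.sparse.linalg import eigsh
from scipy.interpolate import CubicSpline

def minimum_gap(H_of_s, ds=0.1, n_fine=2001):
    grid = np.arange(0.0, 1.0 + 1e-9, ds)
    e0, e1 = [], []
    for s in grid:
        vals = np.sort(eigsh(H_of_s(s), k=2, which='SA',
                             return_eigenvectors=False))
        e0.append(vals[0])
        e1.append(vals[1])
    s_fine = np.linspace(0.0, 1.0, n_fine)
    gap = CubicSpline(grid, e1)(s_fine) - CubicSpline(grid, e0)(s_fine)
    i = int(np.argmin(gap))
    return gap[i], s_fine[i]   # minimum gap and its position along the schedule

# gap, s_min = minimum_gap(H_s)   # with H_s from the previous sketch
```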

2.2.3. Quantum annealing simulation

We estimated the expected TTS by simulating the quantum dynamics of the annealer. We assumed an idealised quantum annealer at zero temperature, in the absence of noise, and with perfect control over couplings, which was simulated by numerical integration of the time-dependent Schrödinger equation:

$\mathrm{i}\hbar\,\frac{\mathrm{d}}{\mathrm{d}t}\vert {\Psi}(t)\rangle = H(t)\vert {\Psi}(t)\rangle$,    (5)

where H(t) is the time-dependent Hamiltonian defined in equation (1). We integrated this equation using the Runge–Kutta 5th order method with adaptive timestepping, as implemented in the PETSc package [72, 73]. Runge–Kutta methods have been previously validated for studying quantum annealing [74]. At the end of the evolution, the final state |Ψ(t = T)⟩ is a vector of amplitudes Ψi whose square modulus $\vert {{\Psi}}_{i}{\vert }^{2}$ is the probability of measuring a particular binary string in the device.
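
A minimal sketch of this procedure, with SciPy's adaptive Runge–Kutta solver standing in for PETSc and ħ set to 1 (atomic units); H_of_s is assumed to be a callable returning the sparse H(s) of equation (1), such as H_s from the earlier sketch:

```python
# Hedged sketch of the unitary-dynamics simulation (hbar = 1): SciPy's RK45
# stands in for the PETSc integrator used in the paper.
import numpy as np
from scipy.integrate import solve_ivp

def anneal(H_of_s, n_qubits, T, schedule=lambda u: u):
    dim = 2 ** n_qubits
    psi0 = np.full(dim, 1.0 / np.sqrt(dim), dtype=complex)  # |+>^n, ground state of H_start

    def rhs(t, psi):
        # Schrodinger equation: d|psi>/dt = -i H(s) |psi>, with s = f(t/T)
        return -1j * (H_of_s(schedule(t / T)) @ psi)

    sol = solve_ivp(rhs, (0.0, T), psi0, method='RK45', rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

# psi_T = anneal(H_s, n, T=100.0)
# p_success = abs(psi_T[np.argmin(energies)]) ** 2   # amplitude on the true minimum
```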

Several authors have stressed the need to use optimised TTS metrics to assess the scaling of quantum annealing algorithms, e.g. [47, 64]. We employed the Bayesian optimisation package GPyOpt [57] to optimise the annealing time T. We defined an optimisation domain between 0.1 and 1000 a.u., which was considered acceptable after initial exploration.

We allowed a maximum of 50 iterations for annealing programs involving a single parameter (the sample time), and a maximum of 500 iterations for trajectories including a catalyst, where the strength of the catalyst (including the energy-scale multiplicative factor 1/N, i.e. λ' = λ/N) has to be optimised alongside the sample time. In the case of the non-stoquastic catalyst, given the loss of sparsity of the Hamiltonian matrix, the optimisation was stopped after 30 iterations for some of the length 9 peptides. Default parameters were used otherwise.
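
The optimised TTS metric follows the standard definition of Rønnow et al [47]: if a single run of length T succeeds with probability p(T), the expected number of repeats to reach 50% overall success is ln(1 − 0.5)/ln(1 − p(T)), so TTS(T) = T ln(0.5)/ln(1 − p(T)). A sketch of the optimisation loop, with a coarse log-spaced scan standing in for the GPyOpt Bayesian search over T ∈ [0.1, 1000] a.u.:

```python
# Optimised time-to-solution: repeat-until-success amplification of a short run.
import numpy as np

def tts(T, p, target=0.5):
    p = min(max(p, 1e-12), 1.0 - 1e-12)   # guard the logarithms
    return T * np.log(1.0 - target) / np.log(1.0 - p)

def optimise_tts(success_prob, T_min=0.1, T_max=1000.0, n_grid=25):
    """success_prob(T): single-run success probability from a dynamics simulation."""
    best = (np.inf, None)
    for T in np.geomspace(T_min, T_max, n_grid):
        candidate = tts(T, success_prob(T))
        if candidate < best[0]:
            best = (candidate, T)
    return best   # (optimal expected TTS, optimal annealing time)
```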

2.3. Optimal trajectory design

We considered the design of tailored trajectories that increase the efficiency of the algorithm. These trajectories were designed by considering the tendency of gaps to appear more often at certain stages of the annealing trajectory (see figure S8). We developed a heuristic that accounts for the relative probability of encountering a minimum gap, considering the magnitudes of all the gaps found at that position.

Our heuristic is based on dividing the annealing trajectory into small regions (Δs = 0.05) and estimating the following magnitude for each bin, which we denote the 'R-score':

$R_{k} = \sum_{j}\frac{P_{kj}}{f\left({\Delta}_{kj}\right)}$,    (6)

where k is the index of the bin; the sum extends over all peptides whose minimal gap falls in that bin, indexed by j; $P_{kj}$ is the normalised probability that there is a gap at that position (estimated using kernel density estimation with the Epanechnikov kernel and 0.01 bandwidth); ${\Delta}_{kj}$ is the magnitude of said gap; and f is a function that weighs the magnitude of the gap into the R-score. The values of the R-score are normalised and used to define a piecewise linear interpolation function that slows down in regions likely to contain a vanishing gap, and anneals at a faster rate otherwise.
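
A hedged reconstruction of this heuristic (hypothetical helper names; the linear weighting f(x) = x is shown) allocates wall time to each Δs = 0.05 bin in proportion to its normalised R-score, so the schedule s = f(t/T) slows down where small gaps are likely:

```python
# Sketch of the tailored-schedule heuristic; not the authors' exact code.
import numpy as np
from scipy.interpolate import interp1d

def r_scores(gap_positions, gap_sizes, gap_probs, n_bins=20, f=lambda x: x):
    """R_k = sum over peptides j whose minimal gap falls in bin k of P_kj / f(Delta_kj)."""
    scores = np.zeros(n_bins)
    for s_j, d_j, p_j in zip(gap_positions, gap_sizes, gap_probs):
        k = min(int(s_j * n_bins), n_bins - 1)
        scores[k] += p_j / f(d_j)
    return scores / scores.sum()

def tailored_schedule(scores):
    """Piecewise-linear s(u): each bin receives a fraction of runtime equal to its score."""
    scores = np.asarray(scores) + 1e-9        # keep the knots strictly increasing
    scores = scores / scores.sum()
    u_knots = np.concatenate(([0.0], np.cumsum(scores)))
    u_knots[-1] = 1.0                         # guard against floating-point drift
    s_knots = np.linspace(0.0, 1.0, len(scores) + 1)
    return interp1d(u_knots, s_knots)

# schedule = tailored_schedule(r_scores(positions, gaps, probs))
# psi_T = anneal(H_s, n, T, schedule=schedule)   # reusing the earlier sketch
```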

We consider two types of optimisation: one employing peptides of all lengths, and a different one designing an optimal program for each length. We also consider several functional forms $f({\Delta}_{kj})$ for the R-score: linear (x), square root ($\sqrt{x}$) and cubic root ($\sqrt[3]{x}$). Performance comparisons for these functions are reported in figures S9–S12.

2.4. Classical simulated annealing

We compared the performance of quantum annealing to classical simulated annealing, one of the most common optimisation algorithms in computational biology, which is used in many modern structure prediction packages like Rosetta [62]. We employ an off-the-shelf classical simulated annealing subroutine, the gsl_siman.h module of the GNU Scientific Library [63]. This module requires a definition of the optimisation space, and a function that returns the energy of a given configuration. We implemented a simple subroutine to compute the energy of a sequence of lattice moves and, in order to avoid biases deriving from our implementation, we analyse the results in terms of Monte Carlo moves instead of times. We note that simulated annealing is not applied to the problem in its PUBO form, but to the energy function encoded therein, as we aim for a head-to-head comparison between quantum and classical approaches.

The classical simulated annealing subroutine employs several parameters, including the number of trials per step, the number of iterations per temperature, the maximum step size of the random walk and the parameters of the Boltzmann distribution (initial and final temperature, Boltzmann constant and damping factor). All of these parameters, except for the Boltzmann constant (set to 1) and the step size (set to 1), were optimised using Bayesian optimisation in the same way parameters were tuned in numerical simulations of quantum annealing.
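
For illustration, a schematic Python counterpart of this gsl_siman setup (with placeholder energy and neighbour functions standing in for the lattice-move energy subroutine) counts Monte Carlo moves as follows:

```python
# Schematic simulated annealing, reporting cost in Monte Carlo moves
# (i.e. energy evaluations) rather than wall time.
import math
import random

def simulated_annealing(init_state, energy, neighbour,
                        t_initial=10.0, t_min=0.01, damping=1.005,
                        iters_per_t=100, k_boltzmann=1.0):
    state, e = init_state, energy(init_state)
    best, best_e, moves = state, e, 0
    t = t_initial
    while t > t_min:
        for _ in range(iters_per_t):
            cand = neighbour(state)     # e.g. resample one lattice move
            e_cand = energy(cand)
            moves += 1                  # one Monte Carlo move per energy evaluation
            # Metropolis acceptance with Boltzmann constant k
            if e_cand < e or random.random() < math.exp(-(e_cand - e) / (k_boltzmann * t)):
                state, e = cand, e_cand
                if e < best_e:
                    best, best_e = state, e
        t /= damping                    # geometric cooling schedule
    return best, best_e, moves
```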

2.5. Statistical analysis of scaling

We assessed the significance of these results using a statistical analysis. We performed a non-linear least-squares fit of our data using the lmfit library [75]. We considered four functional models: polynomial ($x^{\alpha}$), exponential (${\text{e}}^{\alpha x}$), square exponential (${\text{e}}^{\alpha {x}^{2}}$) and cubic exponential (${\text{e}}^{\alpha {x}^{3}}$). The function was augmented by a constant β, which was set to the average value of the dependent variable for a given length.

We employed three statistical model selection criteria to decide the function that provided the best explanation of the data: the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the mean squared error (MSE) of the means. The last method was chosen because of the nature of the dataset: the independent variable is discrete, hence the model only provides the prediction for 4 values (3 in 3D). Thus, understanding how the real means differ from the predicted means provides useful information for model selection. These methods are introduced in more detail in appendix B.
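
A sketch of this fitting and selection step with lmfit (illustrative data only; the additive constant β is treated here as an ordinary fit parameter rather than being fixed from the data as in the paper):

```python
# Fit each candidate scaling law and compare information criteria with lmfit.
import numpy as np
from lmfit import Model

def poly_model(x, alpha, beta):
    return x ** alpha + beta

def exp_model(x, alpha, beta):
    return np.exp(alpha * x) + beta

def sqexp_model(x, alpha, beta):
    return np.exp(alpha * x ** 2) + beta

x = np.array([6.0, 7.0, 8.0, 9.0])     # peptide lengths
y = np.array([1.0, 2.2, 4.1, 9.3])     # illustrative runtimes only

for fn in (poly_model, exp_model, sqexp_model):
    result = Model(fn).fit(y, x=x, alpha=0.1, beta=0.0)
    print(f'{fn.__name__}: AIC = {result.aic:.1f}, BIC = {result.bic:.1f}')
```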

3. Spectral gap

We start our analysis by studying the minimum spectral gap, Δ, between the ground state and the first excited state. It is often stated (based on theoretical arguments) that the runtime of quantum annealing algorithms is proportional to $\mathcal{O}({{\Delta}}^{-2})$ [40]. Unfortunately, many problems exhibit an exponentially vanishing gap with increasing problem size [41–43], and in particular it is believed that no form of quantum computing is able to efficiently solve NP-complete problems, or at least no such report has withstood scrutiny [40, 44]. We note that there exist numerical problems other than the gap that may impair the performance of quantum annealing, for example the precision needed to specify the PUBO, which in the Hamiltonians considered here grows rapidly (see figures S15 and S16). The distribution of gaps for the protein lattice problem in 2D and 3D, considering a stoquastic process without a catalyst, is shown in figure 2.

Figure 2. (A) and (B) Distributions of log10 Δ, the decimal logarithm of the minimum spectral gap between the ground state and first excited state energy levels, for UGEM peptides of different size in 2D (A) and 3D (B). The sequence length is given in number of amino acids (aa). These violin plots employ a Gaussian kernel density estimation method to show a smooth representation of the distribution of data. In the 2D case (A), it is clear that the median gap remains approximately constant, while the worst-case gap shrinks exponentially. In the 3D case (B), a similar effect is observed, although it is made less clear by the particular behaviour of length 8 peptides, which always have their optimal structures arranged in a cube. (C) and (D) Least-squares fit of subsamples of the data to different functional models. The rate of decay of the minimum spectral gap varies significantly between quantiles.


The distribution of gaps at a given length resembles a skewed Gaussian distribution: the majority of gaps are concentrated around a narrow centre spanning two orders of magnitude, and there are long, thick tails (containing 5%–10% of the data) that spread away several more orders of magnitude to one side. These tails extend to much smaller gaps as the length increases. In the dataset of 2D peptides, the size of the worst-case gaps decreases by five orders of magnitude within a length increase of four amino acids, but only by an order of magnitude and a half within the last three lengths considered. Similar results are seen for the 3D peptides. Length 8 three-dimensional peptides show smaller gaps because of their distinct distribution of energetic levels due to cubic symmetry.

This steep decline for the hardest instances is in contrast with the small decrease experienced by an average peptide. The vast majority of the examples populate the area around the median gap, exhibiting a steady but far slower decline. A close examination of the violin plots for the 2D examples also reveals that the position of the peak of the distribution tends to rise as the sequence length increases, and in fact, if we ignore the tails of the distribution, the average gap increases rather than decreases. In interpreting this finding, it is important to keep in mind that our dataset is composed of peptide folding problems that are hard by design, since they have only one ground state solution (plus, in some cases, symmetry-equivalent configurations). These problems are known to be classically very hard [21, 22]. In addition to the lack of structure of NP-hard problems, the proportion of lowest-energy solutions in the solution space is minimal, so randomised algorithms will find it very challenging to find the ground state.

We characterised the scaling of the gap using regression analysis by maximum likelihood estimation (see section 2 for details). Four functional forms were considered: polynomial, $x^{\alpha}$, exponential, ${\text{e}}^{\alpha x}$, square exponential, ${\text{e}}^{\alpha {x}^{2}}$, and cubic exponential, ${\text{e}}^{\alpha {x}^{3}}$. We then employed several standard model selection criteria, detailed in appendix B, to select the model that best explains the data. The polynomial model $x^{\alpha}$, with α ≈ −0.75 in 2D and α ≈ −0.4 in 3D, is selected by all criteria, and is significantly better than the second-best model.

We also binned the data into different quantiles and repeated the inference, to account for the inhomogeneity of the results (see tables S2 and S3 in appendix B). In 2D, the polynomial model is consistently selected across all quantiles, and in 3D, there are some quantiles where the exponential and cubic exponential are selected, which is probably due to the limited range of the dataset and the symmetry effect discussed above. We observed a notable variation of the coefficients across quantiles, as depicted in figures 2(C) and (D). For example, the first quantile bin, 0%–10%, with α = −3.23 in 2D and α = −3.65 in 3D for the polynomial model, contains examples whose gaps vanish at a notably larger rate. On the other hand, the two upper quantile bins (75%–90% and 90%–100%) display positive α values, showing that the gap actually widens with increasing size. Only a portion of the problem instances exhibits fast gap vanishing.

These data do not allow us to conclude that the gap vanishes polynomially. Our results are reminiscent of a previous study on the exact cover problem by Young et al [45, 46]. In that study, it was found that, while the scaling of the spectral gap at small problem sizes is consistent with a polynomial [44], at large problem sizes the scaling turns exponential. Similarly, the fraction of problem instances exhibiting small gaps increases at large problem size [46]. We have been unable to study the behaviour of the protein lattice problem at greater sizes, given the large number of qubits and the difficulty of obtaining Hamiltonian expressions beyond 9 amino acids. However, we hypothesise that the protein lattice problem presents exponentially vanishing spectral gaps that will hinder a general polynomial-time solution by quantum annealing for large sizes, while still exhibiting reasonable advantage for problems of modest size, such as may arise in high-throughput peptide discovery experiments.

4. Simulations of stoquastic dynamics

The minimum spectral gap imposes an upper bound on the running time of quantum annealing, but in order to understand the behaviour of the process we need to access the evolution of the quantum state during the computational process. We investigate this regime by numerical integration of the Schrödinger equation. Since this procedure is costly, the assessment of our entire dataset of circa 30 000 peptides is beyond our resources and, instead, we selected two samples based on spectral gap values. The first sample contains the set of peptides with the smallest gaps (worst-case set), while the second sample is a random selection of peptides (random set). In both cases, each sample contains 100 peptides per chain length of a given dimension, giving a total of 1400 instances. We report the main features of these Hamiltonians in figures S13–S16.

A comparison of this sort requires optimising the annealing time to maximise the probability of success. As described by Rønnow et al [47], a short run can provide a small but appreciable success probability that can be amplified by repetition. In many cases, repeated short runs amount to a much shorter total runtime than a single long, quasi-adiabatic run. We employed Bayesian optimisation [56, 57] to find the optimal runtime, as detailed in section 2. The optimised TTS metric, corresponding to the expected runtime to find the correct solution with probability 50%, is shown in figures 3(A) and (B).

Figure 3. (A) and (B) Distributions of the expected quantum annealing runtime T of the worst-case (A) and random (B) sets. (C) and (D) Least-squares fit of subsamples of the 2D data to different functional models for the worst-case (C) and random (D) sets. (E) and (F) Least-squares fit of subsamples of the 3D data to different functional models for the worst-case (E) and random (F) sets.


This approach connects with recent developments in what has come to be known as diabatic quantum computing [58, 59]. In this procedure, the system is allowed excitations to higher-energy states, which can result in significant speedups and is considered one of the most promising approaches for useful quantum annealing [60]. For example, numerical studies on the problem of maximum two-satisfiability (MAX-2-SAT) have demonstrated that for the hardest random instances, the optimal annealing times were significantly lower (often, several orders of magnitude) when the annealing was allowed to proceed diabatically [61].

The optimisation of the annealing time of an individual run has a significant impact. We simulated a baseline experiment in which the quantum annealer was run for 1000 a.u., and found an average improvement in the expected total runtime of 15 orders of magnitude in 2D and 10 orders of magnitude in 3D (see figure S1). We also find a small but appreciable difference in the dependence on the gap, as depicted in table 1.

Table 1. Correlation coefficients between expected runtime and gap, for the results obtained by optimising the individual runtime and the results obtained with a fixed runtime of 1000 a.u. ρ is Spearman's rank correlation coefficient between Δ and T, which describes the monotonic relationship between the two variables (ρ = +1 is perfectly monotonic, ρ = −1 is perfectly inverse monotonic). R is Pearson's correlation coefficient between ${\mathrm{log}}_{10}\left(\frac{1}{{{\Delta}}^{2}}\right)$ and log10 T.

                  ρ (2D)    ρ (3D)    R (2D)    R (3D)
Optimised time    −0.66     −0.44     0.52      0.38
Baseline          −0.73     −0.34     0.62      0.30
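
The table's correlations can be reproduced along these lines (a scipy.stats sketch; gaps and runtimes are assumed per-peptide arrays of minimum gaps and optimised TTS values):

```python
# Spearman's rho between gap and runtime, and Pearson's R between
# log10(1 / Delta**2) and log10(T).
import numpy as np
from scipy.stats import spearmanr, pearsonr

def gap_runtime_correlations(gaps, runtimes):
    gaps, runtimes = np.asarray(gaps), np.asarray(runtimes)
    rho, _ = spearmanr(gaps, runtimes)                   # monotonic relation
    r, _ = pearsonr(np.log10(1.0 / gaps ** 2), np.log10(runtimes))
    return rho, r
```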

For both the 2D and 3D peptides, the worst-case set containing the smallest gaps does not require significantly longer expected runtimes than the random set. We performed a two-tailed Welch's t test, and found that the random and worst-case sets could not be distinguished (p-value 0.34, average p-value of subgroup analysis 0.31). In other words, the fact that a problem presents a very small minimum spectral gap does not necessarily indicate it will require a long runtime. This apparently contradictory observation might be explained by the fact that the ${\Delta}^{-2}$ scaling in relation to the spectral gap is only valid in the asymptotic limit, and other terms may govern the dynamics at smaller sizes; moreover, this rule is, after all, an upper bound, and it is very possible that an instance succeeds even if the annealing is conducted diabatically. In practical terms it is not necessary to guarantee that the final state has a large fidelity with the ground state, but rather to ensure that it has a sizeable amplitude, in order to obtain the expected result after enough trials. Another possibility to note is that within the range of problem sizes we inspect there may be only one (or a few) instances of the minimum (or near-minimum) gap occurring during the adiabatic sweep, whereas for large problems a near-minimum gap may occur at multiple points. In addition, the decrease by five orders of magnitude in 2D gaps discussed earlier is also markedly absent from figures 3(A) and (B).
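
A sketch of this statistical comparison (scipy.stats, with illustrative placeholder runtimes standing in for the per-peptide optimised values):

```python
# Two-tailed Welch's t-test (unequal variances) between the two sets.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
runtimes_worst = rng.lognormal(mean=1.0, sigma=0.5, size=100)    # placeholders
runtimes_random = rng.lognormal(mean=0.95, sigma=0.5, size=100)
t_stat, p_value = ttest_ind(np.log10(runtimes_worst), np.log10(runtimes_random),
                            equal_var=False)
print(f'Welch t = {t_stat:.2f}, p = {p_value:.2f}')
```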

We performed a scaling analysis identical to that of the previous section, finding that, in all cases, either the exponential model is selected over the polynomial, or there is not a significant difference between the two (see tables S4–S7 in appendix B). In particular, in 2D the polynomial model $x^{\alpha}$ (with α ≈ 0.65) and the exponential model ${\text{e}}^{\alpha x}$ (with α ≈ 0.15) cannot be separated. In 3D, an exponential model ${\text{e}}^{\alpha x}$ with α ≈ 0.45 is selected with high significance.

Our findings suggest that the protein lattice problem is not as severely affected by the vanishing of the spectral gap as might have been expected. Problems with gaps smaller than $10^{-2}$ a.u. (and down to $10^{-8}$ a.u.) do not take significantly longer than problems with a median gap of 0.22 a.u. Moreover, our results hint at an exponential scaling, albeit with a potentially small rate constant. This analysis suggests that this quantum annealing application has a milder scaling than previously expected.

5. Improvement of the annealing pathway

In the previous section, we found that optimising the sample time can dramatically enhance the performance of quantum annealing, often by several orders of magnitude. This raises the question of whether other simple changes may be able to similarly increase efficiency. In this section we explore several approaches that deliver increasing improvements to the performance of the algorithm.

We first considered whether the linear interpolation function is the most appropriate choice, or if conversely a non-linear interpolation function increases the efficiency of the algorithm. As a first approach, we experimented with three alternative functions: quadratic ($x^{2}$), cubic ($x^{3}$) and, in order to include a rapidly changing schedule, sigmoid ($1/(1+{\text{e}}^{-z})$). We optimised these functions following the same procedure as in the previous section. The results are summarised in figures S2 and S3.

Our results suggest that none of the functions considered is able to deliver a significant improvement. The quadratic function displays only minor deviations, and the cubic and sigmoid functions lead to markedly worse performance in 99% of the cases. Moreover, when the runtime was reduced, the magnitude of the change was much smaller (roughly 80% of the original time to solution) than when the runtime was increased (140%–150% in the cubic case and 230%–300% in the sigmoid). This supports our initial choice of the linear annealing schedule to establish general trends.

We then considered the introduction of a non-stoquastic catalyst, which has been considered as a potential technique to improve quantum annealing [48–50]. As detailed in the methods section, this catalyst incorporates terms ${\sigma }_{i}^{x}{\sigma }_{j}^{x}$, and is multiplied by a tunable parameter λ that was optimised alongside the sample time. Introducing a catalyst of this form alters the sparsity of the Hamiltonian matrix that our code was optimised for, and therefore we could only consider one of the interpolation functions. This is adequate since our previous examination reveals that the choice of interpolation function has a small impact on efficiency. The results of these simulations are reported in figures S4 and S5.

The introduction of a non-stoquastic catalyst improves the runtime in about 5% of the worst-case examples, while causing a notable average slowdown in the random sample. The magnitude of the deterioration seems to increase with the size of the sequence, while the proportion of sequences that are improved, as well as the magnitude of said improvement, decreases with the size of the sequence. These results suggest that while the non-stoquastic catalyst may well be useful in a fraction of the hardest problems, it is not a general solution. Incidentally, our results agree with those in a recent article by Crosson et al [51], which provides significant analytical and numerical evidence that, in general, stoquastic Hamiltonians are more adept at optimisation than their non-stoquastic counterparts, while allowing for specific, cleverly engineered non-stoquastic catalysts to provide excellent (sometimes even exponential) speedups.

The theoretical arguments in [51] led us to consider a stoquastic catalyst that perturbs the annealing trajectory while maintaining stoquasticity. We conceived a catalyst that rotates the transverse field throughout annealing with a sum of terms ${\sigma }_{i}^{x}$. Incidentally, this choice of rotation axis preserves the sparse structure of the matrix and enables rapid simulation. As in the previous case, we introduced a tunable parameter λ that was optimised alongside the sample time.

The stoquastic catalyst also improves the runtime in about 5% of the examples. In this case, however, both the proportion of sequences and the magnitude of the improvement seem to increase with the size of the sequence, and while the improvement is generally more notable in the worst-case dataset than in the random dataset, both present some sort of improvement. The magnitude of the deterioration is significantly smaller than in the non-stoquastic case, and is smaller than an order of magnitude, compared to the 5–6 orders of magnitude introduced before. These results may indicate that a stoquastic catalyst is generally preferable to a non-stoquastic one.

After exploring these alternatives, we considered the design of tailored annealing trajectories: schedules designed to slow down near the spectral gap, and therefore reduce the ratio of excitation to local minima. This improvement has been considered in several works [5255], and in this particular problem it is motivated by the tendency of spectral gaps to concentrate around certain particular positions (see figure S8). In practice, a tailored trajectory is simply a piecewise linear function, defined on equally spaced bins, where the slope of the function at a given bin is inversely proportional to the probability of finding a gap of certain magnitude. We report the heuristic employed to design these schedules in the methods section. The results of these experiments are summarised in figures S9 and S10.

Tailored trajectories display better results than any of the previous approaches. Nearly half of the sequences in the random dataset, and about 80% of the worst-case dataset, experience some improvement. The magnitude of the speedup is also significantly larger, with many cases achieving accelerations between 10× and 100×, and we observe an increasing trend where longer peptides improve more than shorter ones. Unfortunately, this technique has a significant caveat: when the strategy fails, the deterioration can be far worse ($10^{4}$ to $10^{6}\times$ slowdowns in some cases).

A primary reason why the tailored approach can result in such slow annealing processes is that our heuristic has to balance the positions where there is a high probability of encountering the gap against the magnitude of said energy difference. In some cases, particularly in the longer sequences, this balance fails. A potential solution is to employ the distribution of gaps for every peptide length, instead of considering all the data at once. We therefore designed one annealing schedule per peptide length, and simulated the results as in the previous sections. The results of this experiment are reported in figures S11 and S12.

This approach is unsurprisingly the most successful of all, improving 60% of the sequences in the random dataset, and an impressive 95% of the sequences in the worst-case dataset. The magnitude of the improvement is similar, although the average improvement is higher; and we do not see the significant deterioration observed in the previous approach. These results suggest a potential strategy to optimise quantum annealing further.

In practical applications, the position of the gap will not be readily available. However, we have observed that the position of the gaps can be estimated by running short annealing runs with a sudden change at different parts of the schedule. We considered a piecewise linear function, similar to the one used in the tailored trajectories, but changing the slope only at the point where we probe for the gap. When applied to our dataset, this method shows some correlation between the estimated and actual gap positions (full dataset: R = 0.17, p = $10^{-6}$; worst-case dataset: R = 0.27, p = 5 × $10^{-8}$), suggesting that more involved programs may be able to produce reasonable estimations of the gap position. We acknowledge, however, that this conclusion may not readily translate to larger sizes if the gaps become exponentially small in an exponentially small area, and that further experimentation is necessary.

The results in this section suggest that simple engineering of the Hamiltonians and trajectories involved in the annealing process can deliver a substantial increase in performance. In this sense, the results presented earlier represent a 'sensible baseline' of what is achievable with quantum annealing; and the experiments described in this section propose a pathway to improve the performance further. This also allows us to postulate that further experimentation in this domain may deliver better scalings (in relative terms, but potentially also in complexity) than previously reported.

6. Comparison with simulated annealing

We have observed that quantum annealing requires exponentially growing runtimes to find the ground state of a protein lattice model, even in the small range of lengths explored in our dataset (see figure 3). Assuming this remains true of larger systems, as one would expect, it precludes an exponential speedup, since enumerating all possible conformations of a lattice model has the same asymptotic complexity. However, an algorithm which scales significantly better, that is, with a far smaller exponential rate α than the classical case, could still be useful for practical applications.

In this section, we compare quantum annealing with classical simulated annealing using the data displayed in figure 4. Unlike other comparisons of quantum annealing and classical simulated annealing (e.g. [64]), we have not constrained the classical approach to solve a problem in the Ising form. More importantly, we consider an NP-hard problem, as opposed to previous work by Albash et al [64] that considered simpler problems.

Figure 4. (A) and (B) Distributions of the expected classical simulated annealing number of Monte Carlo moves N of the worst-case (A) and random (B) sets. (C) and (D) Least-squares fit of subsamples of the 2D data to different functional models for the worst-case (C) and random (D) sets. (E) and (F) Least-squares fit of subsamples of the 3D data to different functional models for the worst-case (E) and random (F) sets.


The distributions presented in figures 4(A) and (B) show a rapidly growing number of Monte Carlo moves. Visually, the runtime appears to display worse-than-exponential growth. Our model comparison analysis (see tables S8–S11 in appendix B) finds that, in all cases, the model fits to a square exponential ${\text{e}}^{\alpha {x}^{2}}$ with a high level of significance (and this behaviour is reproduced at every quantile).

There are some theoretical arguments (e.g. [65]) which conjecture that simulated annealing converges in exponential time. The square exponential fit found by our statistical analysis could be an artefact of parameter optimisation (note that we optimise four parameters for a single simulated annealing run, as opposed to only one in quantum annealing). In either case, this points to quantum annealing providing a better scaling. Our results are made stronger by the fact that in this analysis we have considered only the number of Monte Carlo moves, i.e. the number of evaluations of the energy function. The cost of evaluating this energy is approximately quadratic in the size of the peptide, and the actual performance will be slightly worse than depicted in figure 4. Of course, since this cost is polynomial, there will be no changes to the complexity of the algorithm, although the practical scaling will be worse.

These findings suggest that quantum annealing has an improved performance over simulated annealing. Over the system sizes we examined, the runtime of quantum annealing scales approximately exponentially, while simulated annealing shows a rapidly growing function that fits better to a square exponential.

7. Discussion

In this article, we have presented a numerical investigation of quantum annealing applied to protein lattice models. We have considered nearly 30 000 protein sequences, each with a unique global energy minimum, which represent realistic protein problems displaying a folded state, but are also high-difficulty instances of an NP-hard problem.

We first turned our attention to the minimum spectral gap, a quantity connected theoretically to the runtime of a perfect annealer. We have observed that the gap for these protein sequences can decrease quickly in magnitude, although the scaling appears to be polynomial in the range of sizes considered. The polynomial scaling was confirmed by several statistical selection criteria (detailed in the methods section and appendix B), although comparison with prior results reported in the literature leads us to hypothesise that the gap vanishes exponentially. We have also observed that the worst cases decrease by five orders of magnitude between 6 and 9 amino acids. This numerical evidence shows that adiabatic evolution of the computer, where the probability of success nears 100%, will require rapidly growing runtimes and likely be infeasible for worst-case problems.

We then considered optimal annealing runs, where the computer is run for a shorter time to produce a small but appreciable probability of success that is amplified by repetition. We have established that this runtime grows exponentially with peptide length, although the rate of growth was far smaller than the gap analysis suggested. The scaling was found to be approximately equal to ${\text{e}}^{0.15L}$ for 2D examples and approximately ${\text{e}}^{0.75L}$ for 3D examples within the range of problem sizes studied. We also found statistical evidence that peptides with very small gaps are not significantly harder than average cases, and that the exponential rates are almost identical for these two datasets.

We experimented with the parameters of the annealing process in pursuit of strategies to increase efficiency. We considered non-linear interpolation functions, non-stoquastic catalysts, and two adaptive programs to slow down the annealing schedule near regions where a potential gap is expected. Our results demonstrate that the last approach is able to deliver 10–100× speedups. Furthermore, we observe a trend of increasing relative improvements as the size increases: the vast majority of the peptides in the worst-case dataset were improved by this strategy, suggesting that potential decreases in performance can be addressed via careful engineering. We also suggest a method to estimate the gap position when this is not readily available.

A comparison with classical simulated annealing on our dataset shows that the quantum annealing approach is preferable. Statistical modelling seems to suggest that the scaling of simulated annealing fits best to a square exponential, ${\text{e}}^{\alpha {x}^{2}}$, although theoretical arguments lead us to expect this behaviour to become exponential with a large rate as problem size increases. This implies that for large peptide sizes, a quantum annealer may take significantly less time than a classical machine running a stochastic algorithm. We note, however, that although classical simulated annealing is the state of the art for physics-based protein structure prediction, other classical algorithms may exhibit superior performance, either minimising a classical energy function or finding the minimum of the PUBO problem that we have considered with quantum annealing.

One of the reasons why quantum annealing may prove useful for protein folding and structure prediction is the limited size of interesting problems. More than half the structures deposited in the Protein Data Bank contain fewer than 500 residues, and 80% of the domains in the CATH database [66] are smaller than 200 residues. Similarly, the median length of a human protein is 375 amino acids [30]. Even if the scaling of quantum annealing is exponential, as long as the exponential rate is low enough to fold small proteins or domains in a timely fashion, this approach will be useful for a multitude of practical problems.

There are other advantages to quantum annealing that could be explored further. When the algorithm fails, the system has been excited to a higher energy state; although this corresponds only to a local minimum, it may still be useful. For example, if this result is used in a bottom-up approach to explore the conformational space of a protein, it may still be a good starting point for more complex simulations. In contrast, classical simulated annealing will often fail to provide a solution that is close to the global minimum. Future work will investigate this fact, and aim to establish 'crossover' points where the scaling worsens.

We believe these results suggest that annealing approaches to quantum optimisation may be a powerful heuristic approach to solve the protein lattice problem, whether in specialised processors [38], near-term digital quantum computers [67] or otherwise. The difficulties of the quantum approach are shared by the classical simulated annealing approach, while the quantum scaling is better. Moreover, even in cases where the annealing approach fails, it can provide solutions that, despite not being equivalent to the global minimum, are very close to it, and may be useful in a subsequent refinement procedure. These findings offer encouragement for further research in quantum protein lattice folding and other hybrid quantum–classical algorithms for protein structure prediction.

Acknowledgments

CO would like to thank F Hoffmann-La Roche, UCB and the UK National Quantum Technologies Programme for financial support through an EPSRC studentship (EP/M013243/1). GMM thanks the EPSRC and MRC for support via EP/L016044/1 and EP/S024093/1. SCB acknowledges support from the EU Flagship project AQTION, the NQIT Hub (EP/M013243/1) and the QCS Hub. The authors would like to thank Daniel Nissley and Niel de Beaudrap for useful discussions.

Data availability

The data that support the findings of this study will be openly available following an embargo at the following URL/DOI: https://github.com/couteiral/proteinham. Data will be available from 15 October 2021.

Code availability

The code that supports the findings of this study will be made available after acceptance at https://github.com/couteiral/proteinham.

Appendix A.: Additional figures

See figures S1–S20 and table S1.

Figure S1. Distribution of the relative improvement in the time to solution metric, comparing a baseline 1000 a.u. quantum annealing procedure with an optimised sample time, for (A) the worst-case dataset in 2D, (B) the random dataset in 2D, (C) the worst-case dataset in 3D and (D) the random dataset in 3D.


Figure S2. Variation of the annealing process using alternative interpolation functions with respect to the linear baseline. We consider quadratic ($x^{2}$), cubic ($x^{3}$) and sigmoid ($1/(1+{\text{e}}^{-z})$) interpolations. (A) and (B) Proportion of solutions with a worse (red), identical (blue) or better (green) expected runtime than the baseline, for the worst-case (A) and random (B) datasets. (C) and (D) Proportions classified by length for the worst-case (C) and random (D) datasets. The quadratic function shows scarce deviation from the baseline, while both the cubic and sigmoid functions result in worse performance for all but a few cases.


Figure S3. Relative improvement of the annealing process using alternative interpolation functions with respect to the linear baseline. We consider quadratic ($x^{2}$), cubic ($x^{3}$) and sigmoid ($1/(1+{\text{e}}^{-z})$) interpolations. (A) and (B) Relative improvement of the expected time to solution for the worst-case (A) and random (B) datasets. (C) and (D) Relative improvement classified by length for the worst-case (C) and random (D) datasets. Relative improvement here is defined as the ratio of the baseline expected time to solution to that of the run with the alternative interpolation function: values under 1.0 (dotted line) indicate worse performance, while values over 1.0 indicate an improvement from the choice of interpolation function. We observe that most choices of interpolation function show worse performance than linear interpolation.


Figure S4. Variation of the annealing process with a non-stoquastic catalyst. (A) and (B) Proportion of solutions with a worse (red), identical (blue) or better (green) expected runtime than the baseline, for the worst-case (A) and random (B) datasets. (C) and (D) Proportions classified by length for the worst-case (C) and random (D) datasets. The non-stoquastic catalyst improves the annealing process in a reduced proportion of the worst-case examples, negatively affecting the performance of the rest.


Figure S5. Relative improvement of the annealing process using a non-stoquastic catalyst, with respect to the stoquastic baseline. (A) and (B) Relative improvement of the expected time to solution for the worst-case (A) and random (B) datasets. (C) and (D) Relative improvement classified by length for the worst-case (C) and random (D) datasets. Relative improvement here is defined as the ratio of the stoquastic baseline expected time to solution to that of the optimised non-stoquastic run: values under 1.0 (dotted line) indicate worse performance, while values over 1.0 indicate an improvement from the non-stoquastic catalyst. Our results suggest that the non-stoquastic catalyst notably worsens the performance of quantum annealing in virtually all cases, with a modest improvement in a fraction of the worst-case examples. Note that some of the results for size 9 peptides had to be restricted to a smaller set of iterations due to increased computational burden, which may explain why size 9 random peptides seem to perform significantly worse than the trend would suggest.


Figure S6. Variation of the annealing process with a stoquastic catalyst. We consider linear (x), quadratic ($x^{2}$), cubic ($x^{3}$) and sigmoid ($1/(1+{\text{e}}^{-z})$) interpolation functions for the annealing schedule. (A) and (B) Proportion of solutions with a worse (red), identical (blue) or better (green) expected runtime than the baseline, for the worst-case (A) and random (B) datasets. (C) and (D) Proportions classified by length for the worst-case (C) and random (D) datasets. The stoquastic catalyst improves the annealing process in a reduced proportion of the cases, negatively affecting the performance of the rest.


Figure S7. Relative improvement of the annealing process using a stoquastic catalyst, with respect to the non-catalysed baseline. We consider linear (x), quadratic ($x^{2}$), cubic ($x^{3}$) and sigmoid ($1/(1+{\text{e}}^{-z})$) interpolation functions for the annealing schedule. (A) and (B) Relative improvement of the expected time to solution for the worst-case (A) and random (B) datasets. (C) and (D) Relative improvement classified by length for the worst-case (C) and random (D) datasets. Relative improvement here is defined as the ratio of the non-catalysed baseline expected time to solution to that of the optimised stoquastic run: values under 1.0 (dotted line) indicate worse performance, while values over 1.0 indicate an improvement from the stoquastic catalyst. Our results suggest that the magnitude of the improvement is much larger in the worst-case dataset, with some examples reaching 20× acceleration.


Figure S8. Distribution of spectral gap positions and magnitudes for 2D (A) and 3D (B) peptide sequences. The distributions have been estimated using Gaussian kernel density estimation (KDE). In the case of 2D peptides, we have removed the gaps greater than $10^{-3}$ to improve the visualisation of smaller gaps; in the right plot we have used all the gaps in the dataset.


Figure S9. Variation of the annealing process with an annealing schedule built to slow down at regions most likely to display the minimal gap for any length. We consider linear (x), square root ($\sqrt{x}$) and cubic root ($\sqrt[3]{x}$) functions to compute the R-score (see section 2). (A) and (B) Proportion of solutions with a worse (red), identical (blue) or better (green) expected runtime than the baseline, for the worst-case (A) and random (B) datasets. (C) and (D) Proportions classified by length for the worst-case (C) and random (D) datasets.


Figure S10. Relative improvement of the annealing process with an annealing schedule built to slow down at regions most likely to display the minimal gap for any length. We consider linear (x), square root ($\sqrt{x}$) and cubic root ($\sqrt[3]{x}$) functions to compute the R-score (see section 2). (A) and (B) Relative improvement of the expected time to solution for the worst-case (A) and random (B) datasets. (C) and (D) Relative improvement classified by length for the worst-case (C) and random (D) datasets. Relative improvement here is defined as the ratio of the baseline expected time to solution to that of the tailored trajectory: values under 1.0 (dotted line) indicate worse performance, while values over 1.0 indicate an improvement from the tailored annealing schedule.

Figure S11. Variation of the annealing process with an annealing schedule built to slow down at regions most likely to display the minimal gap for the peptide's length. We consider linear (x), square root ($\sqrt{x}$) and cubic root ($\sqrt[3]{x}$) functions to compute the R-score (see section 2). (A) and (B) Proportion of solutions with a worse (red), identical (blue) or better (green) expected runtime than the baseline, for the worst-case (A) and random (B) datasets. (C) and (D) Proportions classified by length for the worst-case (C) and random (D) datasets.

Figure S12. Relative improvement of the annealing process with an annealing schedule built to slow down at regions most likely to display the minimal gap for the peptide's length. We consider linear ($x$), square root ($\sqrt{x}$) and cubic root ($\sqrt[3]{x}$) functions to compute the R-score (see section 2). (A) and (B) Relative improvement of the expected time to solution for the worst-case (A) and random (B) datasets. (C) and (D) Relative improvement classified by length for the worst-case (C) and random (D) datasets. Relative improvement here is defined as the ratio of the expected time to solution of the tailored trajectory to that of the baseline: values under 1.0 (dotted line) indicate worse performance, while values over 1.0 indicate an improvement from the tailored annealing schedule.

Figure S13. Distribution of PUBO terms in the problem Hamiltonians. We plot the number of PUBO terms that involve exactly N spins. (A) and (B) Distribution of terms for the worst-case (A) and random (B) two-dimensional datasets. (C) and (D) Distribution of terms for the worst-case (C) and random (D) three-dimensional datasets.

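For concreteness, a sketch of how the statistics of figures S13–S16 can be read off a PUBO instance, represented here as a mapping from spin-index tuples to coefficients; the toy instance and its values are illustrative.

```python
from collections import Counter

# Toy PUBO: {tuple of spin indices: coefficient} (illustrative values)
pubo = {(0,): 1.5, (1, 2): -2.0, (0, 1, 3): 0.7, (2, 3, 4, 5): -0.3}

# Figure S13: number of terms involving exactly N spins
order_counts = Counter(len(term) for term in pubo)

# Figures S14-S16: average weight, weakest term, weakest/strongest ratio
weights = [abs(coeff) for coeff in pubo.values()]
avg_weight = sum(weights) / len(weights)
weakest = min(weights)
ratio = weakest / max(weights)
```
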
Figure S14. Distribution of the average weight of the terms in the polynomial unconstrained binary optimization (PUBO) problems, stratified by dimension, dataset and protein length. (A) and (B) Distribution of average weight for the worst-case (A) and random (B) two-dimensional datasets. (C) and (D) Distribution of average weight for the worst-case (C) and random (D) three-dimensional datasets.

Figure S15. Distribution of the magnitude of the weakest terms in the polynomial unconstrained binary optimization (PUBO) problems, stratified by dimension, dataset and protein length. (A) and (B) Distributions for the worst-case (A) and random (B) two-dimensional datasets. (C) and (D) Distributions for the worst-case (C) and random (D) three-dimensional datasets.

Figure S16. Distribution of the ratio of the weakest to the strongest terms in the polynomial unconstrained binary optimization (PUBO) problems, stratified by dimension, dataset and protein length. (A) and (B) Distribution of the ratio for the worst-case (A) and random (B) two-dimensional datasets. (C) and (D) Distribution of the ratio for the worst-case (C) and random (D) three-dimensional datasets.

Figure S17. Validation of the cubic spline model used to analyse the gap. The eigenvalues of 120 two-dimensional Hamiltonians, selected by stratified random sampling in both length and dataset (worst-case and average-case), were calculated with a grid spacing of Δs = 0.001. Cubic spline models were fit to a Δs = 0.1 grid identical to that used in section 3, and the predictions for the rest of the grid were compared with the real values. We display the distribution of $R^{2}$ coefficients for all fits for the ground state and first excited state. In every case, the value is over $R^{2} = 0.9999$ (four nines), indicating an excellent fit.

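The validation is straightforward to reproduce with `scipy.interpolate.CubicSpline`; the sketch below uses a smooth stand-in curve in place of the true eigenvalue data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Stand-in for an eigenvalue curve E0(s) on the fine validation grid
s_fine = np.arange(0.0, 1.0 + 1e-9, 0.001)        # Δs = 0.001
energy = np.cos(3.0 * s_fine) + 0.1 * s_fine**2

# Fit on the coarse grid used in section 3, predict on the fine grid
s_coarse = np.arange(0.0, 1.0 + 1e-9, 0.1)        # Δs = 0.1
spline = CubicSpline(s_coarse, np.interp(s_coarse, s_fine, energy))
pred = spline(s_fine)

# Coefficient of determination of the spline predictions
ss_res = np.sum((energy - pred) ** 2)
ss_tot = np.sum((energy - energy.mean()) ** 2)
print(f"R^2 = {1.0 - ss_res / ss_tot:.6f}")
```
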
Figure S18. Distribution of the optimised sample annealing times for the Hamiltonians studied in section 4. (A) and (B) Distribution of the sample annealing time for the worst-case (A) and random (B) two-dimensional datasets. (C) and (D) Distribution of the sample annealing times for the worst-case (C) and random (D) three-dimensional datasets.

Figure S19. Fidelity to the instantaneous ground state for two annealing experiments (T = 1000 a.u.) with two-dimensional Hamiltonians of size 7: an average-case (FKCGYLI, Δ = 0.07) and a worst-case (YWRGLDD, $\Delta = 7.56\times 10^{-4}$). In the average case, the evolution is fully adiabatic, with the fidelity remaining over 99.9% throughout. In the worst case, the fidelity suddenly drops to $10^{-7}$ shortly after traversing the region with the minimum gap.

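For small Hamiltonians, fidelity traces like these can be generated by propagating the state with a piecewise-constant propagator and projecting onto the instantaneous ground state at every step. The following is a minimal sketch; the toy two-qubit anneal is illustrative and the paper's propagation scheme may differ.

```python
import numpy as np
from scipy.linalg import eigh, expm

def fidelity_trace(H_of_s, psi0, T=1000.0, steps=2000):
    """Trotterised evolution under H(t/T), recording the fidelity
    |<phi0(s)|psi(t)>|^2 to the instantaneous ground state."""
    dt = T / steps
    psi = psi0.astype(complex)
    fidelities = []
    for k in range(steps):
        s = (k + 0.5) / steps
        H = H_of_s(s)
        psi = expm(-1j * H * dt) @ psi      # piecewise-constant propagator
        phi0 = eigh(H)[1][:, 0]             # instantaneous ground state
        fidelities.append(abs(phi0.conj() @ psi) ** 2)
    return np.array(fidelities)

# Toy two-qubit anneal from a transverse field to a diagonal problem
sx = np.array([[0., 1.], [1., 0.]]); sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)
H0 = -(np.kron(sx, I2) + np.kron(I2, sx))
HP = np.kron(sz, sz) + 0.5 * np.kron(sz, I2)
psi0 = eigh(H0)[1][:, 0]
fids = fidelity_trace(lambda s: (1 - s) * H0 + s * HP, psi0, T=50.0)
print(fids[-1])   # final fidelity to the problem ground state
```
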
Figure S20. Distribution of ground state probability after an annealing simulation with T = 1000 a.u. for the Hamiltonians studied in section 4. (A) and (B) Distribution of the ground state probability for the worst-case (A) and random (B) two-dimensional datasets. (C) and (D) Distribution of the ground state probability for the worst-case (C) and random (D) three-dimensional datasets.

Table S1. Correspondence between the size of a protein lattice problem and the number of qubits of the corresponding polynomial unconstrained binary optimization (PUBO) problem. On the left-hand side we display the numbers for 2D proteins, and on the right-hand side the figures for 3D proteins. In this work we use the turn encoding [24], where the number of qubits is linearly related to the number of amino acids in the protein. In particular, for a 2D protein of length N we require 2N − 4 fully connected qubits, while for a 3D protein we require 3N − 6.

2D protein size | Number of qubits | 3D protein size | Number of qubits
6 | 7 | 6 | 10
7 | 9 | 7 | 13
8 | 11 | 8 | 16
9 | 13 | — | —

Appendix B. Model comparison raw data

As described in the methods section, we employ three independent statistical criteria to select the functional form that best fits the data. These criteria have been widely used in the statistical literature; we provide a brief introduction below:

  • Akaike's information criterion (AIC), an information-theoretic metric that balances the complexity of the model (first term on the right-hand side) against its consistency with the probability model (second term on the right-hand side). Models with lower AIC are considered to be of higher quality:
    $\mathrm{AIC}=2k-2\,\mathrm{ln}\,\mathcal{L}(\theta)$  (B.1)
  • The Bayesian information criterion (BIC), an information-theoretic metric similar to the AIC, but one that also considers the size of the dataset the model has been parametrized on. Models with lower BIC are considered to be of higher quality:
    $\mathrm{BIC}=k\,\mathrm{ln}\,N-2\,\mathrm{ln}\,\mathcal{L}(\theta)$  (B.2)
  • Mean-squared error (MSE), a classical metric that considers the average error of the model at predicting the points it has been fitted on. Models with lower MSE are better at predicting the dataset, and thus are considered to be of higher quality:
    $\mathrm{MSE}=\frac{1}{N}{\sum }_{r=1}^{N}{\left({y}_{r}-f({x}_{r};\theta )\right)}^{2}$  (B.3)

In these equations, N is the number of data points, f(x) is the fitting function, θ is the vector of adjustable parameters, k is the dimension of θ and $\mathcal{L}(\theta )$ is the likelihood function. The calculation of the likelihood function requires a model of the underlying probability distribution that generates the data. In this work, as is common in statistical analysis, we have assumed that the data is generated by a deterministic process that is corrupted by Gaussian noise. In other words, a data point is generated as:

${y}_{r}=f({x}_{r};\theta )+{\epsilon}$  (B.4)

Here, ${\epsilon}$ is a normally-distributed random variable with zero mean and a non-zero standard deviation σ that is obtained from the data (i.e. ${\epsilon}\sim \mathcal{N}(0,\sigma )$). Therefore, the likelihood function for a given model f(x) is:

$\mathcal{L}(\theta )={\prod }_{r=1}^{N}\frac{1}{\sqrt{2\pi }\,\sigma }\,\mathrm{exp}\left(-\frac{{\left({y}_{r}-f({x}_{r};\theta )\right)}^{2}}{2{\sigma }^{2}}\right)$  (B.5)

In this equation, $f({x}_{r})$ also depends on additional parameters, which are included in θ alongside the standard deviation, e.g. the slope and the intercept if the model is linear. The parameters of the model are found by numerical maximization of the likelihood, and are then used to compute the AIC, BIC and MSE.
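
To make the procedure concrete, the following is a minimal sketch of the maximum-likelihood fit and the three criteria under the Gaussian-noise assumption of equation (B.4). The `fit_and_score` helper and the synthetic data are illustrative, not the paper's actual analysis code.

```python
import numpy as np
from scipy.optimize import minimize

def fit_and_score(x, y, model, n_params):
    """Fit model(x, *theta) by maximum likelihood under the Gaussian
    noise model of equation (B.4), then return AIC, BIC and MSE as in
    equations (B.1)-(B.3)."""
    N = len(x)

    def neg_log_likelihood(params):
        *theta, log_sigma = params
        sigma = np.exp(log_sigma)                 # keep sigma positive
        resid = y - model(x, *theta)
        return (0.5 * np.sum(resid**2) / sigma**2
                + N * np.log(sigma) + 0.5 * N * np.log(2 * np.pi))

    res = minimize(neg_log_likelihood, x0=np.zeros(n_params + 1))
    k = n_params + 1                              # theta plus sigma
    aic = 2 * k + 2 * res.fun                     # AIC = 2k - 2 ln L
    bic = k * np.log(N) + 2 * res.fun             # BIC = k ln N - 2 ln L
    mse = np.mean((y - model(x, *res.x[:-1])) ** 2)
    return aic, bic, mse

# Illustrative use with the exponential model e^(alpha x) from the tables
rng = np.random.default_rng(1)
x = np.linspace(6, 9, 50)
y = np.exp(0.1 * x) + 0.05 * rng.normal(size=x.size)
print(fit_and_score(x, y, lambda x, a: np.exp(a * x), n_params=1))
```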

Table S2. Table of statistical fits for the minimum spectral gap between the ground state and the first excited state, for the dataset of 2D protein problems. In bold, we indicate the model preferred by a particular model selection criterion (see section 2).

Parameter | Model | 0%–10% | 10%–25% | 25%–50% | 50%–75% | 75%–90% | 90%–100% | Full
α | $x^{\alpha}$ | −3.286 ± 0.053 | −1.605 ± 0.012 | −0.779 ± 0.005 | −0.184 ± 0.005 | 0.148 ± 0.008 | 0.361 ± 0.013 | −0.752 ± 0.010
α | ${\text{e}}^{\alpha x}$ | −1.598 ± 0.028 | −0.781 ± 0.007 | −0.379 ± 0.003 | −0.088 ± 0.002 | 0.073 ± 0.004 | 0.173 ± 0.007 | −0.366 ± 0.005
α | ${\text{e}}^{\alpha {x}^{2}}$ | −0.552 ± 0.012 | −0.269 ± 0.003 | −0.130 ± 0.001 | −0.029 ± 0.001 | 0.025 ± 0.001 | 0.057 ± 0.003 | −0.126 ± 0.002
α | ${\text{e}}^{\alpha {x}^{3}}$ | −0.181 ± 0.005 | −0.088 ± 0.001 | −0.042 ± 0.001 | −0.009 ± 0.000 | 0.008 ± 0.001 | 0.018 ± 0.001 | −0.041 ± 0.001
AIC | $x^{\alpha}$ | 2210.30 | −2566.22 | −9641.00 | −9715.92 | −4488.14 | −2076.36 | 7421.17
AIC | ${\text{e}}^{\alpha x}$ | 2381.08 | −1849.16 | −8455.74 | −9587.50 | −4474.26 | −2016.89 | 7670.17
AIC | ${\text{e}}^{\alpha {x}^{2}}$ | 2771.33 | −540.84 | −6273.76 | −9297.51 | −4429.33 | −1884.47 | 8348.67
AIC | ${\text{e}}^{\alpha {x}^{3}}$ | 3021.55 | 159.84 | −5094.38 | −9098.27 | −4389.71 | −1794.93 | 8876.53
BIC | $x^{\alpha}$ | 2215.63 | −2560.49 | −9634.76 | −9709.68 | −4482.41 | −2071.03 | 7428.80
BIC | ${\text{e}}^{\alpha x}$ | 2386.40 | −1843.43 | −8449.50 | −9581.26 | −4468.53 | −2011.56 | 7677.80
BIC | ${\text{e}}^{\alpha {x}^{2}}$ | 2776.65 | −535.11 | −6267.52 | −9291.27 | −4423.60 | −1879.14 | 8356.30
BIC | ${\text{e}}^{\alpha {x}^{3}}$ | 3026.87 | 165.57 | −5088.14 | −9092.03 | −4383.98 | −1789.60 | 8884.15
MSE | $x^{\alpha}$ | 22.435 48 | 2.515 34 | 0.201 13 | 1.587 45 | 3.472 87 | 5.097 03 | 0.208 60
MSE | ${\text{e}}^{\alpha x}$ | 20.791 87 | 2.516 87 | 0.392 73 | 1.677 94 | 3.399 60 | 4.839 87 | 0.399 67
MSE | ${\text{e}}^{\alpha {x}^{2}}$ | 18.276 69 | 2.742 69 | 0.862 78 | 1.870 08 | 3.236 16 | 4.320 81 | 0.864 82
MSE | ${\text{e}}^{\alpha {x}^{3}}$ | 16.625 59 | 2.851 32 | 1.158 69 | 1.991 24 | 3.125 96 | 3.994 54 | 1.158 46

Table S3. Table of statistical fits for the minimum spectral gap between the ground state and the first excited state, for the dataset of 3D protein problems. In bold, we indicate the model preferred by a particular model selection criterion (see section 2).

Parameter | Model | 0%–10% | 10%–25% | 25%–50% | 50%–75% | 75%–90% | 90%–100% | Full
α | $x^{\alpha}$ | −3.665 ± 0.051 | −1.575 ± 0.011 | −0.850 ± 0.004 | 0.391 ± 0.012 | 1.162 ± 0.010 | 1.397 ± 0.015 | −0.403 ± 0.014
α | ${\text{e}}^{\alpha x}$ | −2.131 ± 0.029 | −0.911 ± 0.007 | −0.491 ± 0.002 | 0.239 ± 0.007 | 0.680 ± 0.006 | 0.798 ± 0.010 | −0.231 ± 0.008
α | ${\text{e}}^{\alpha {x}^{2}}$ | −1.121 ± 0.017 | −0.473 ± 0.005 | −0.254 ± 0.002 | 0.139 ± 0.004 | 0.363 ± 0.003 | 0.403 ± 0.007 | −0.117 ± 0.004
α | ${\text{e}}^{\alpha {x}^{3}}$ | −0.553 ± 0.010 | −0.231 ± 0.003 | −0.124 ± 0.001 | 0.072 ± 0.002 | 0.180 ± 0.002 | 0.194 ± 0.004 | −0.056 ± 0.002
AIC | $x^{\alpha}$ | 1021.64 | −4169.39 | −12 803.81 | −4379.94 | −4437.00 | −2481.66 | 5915.36
AIC | ${\text{e}}^{\alpha x}$ | 992.01 | −4058.99 | −12 412.42 | −4490.25 | −4677.59 | −2250.96 | 5936.65
AIC | ${\text{e}}^{\alpha {x}^{2}}$ | 1249.22 | −2938.47 | −9875.84 | −4675.12 | −4325.99 | −1550.74 | 6033.55
AIC | ${\text{e}}^{\alpha {x}^{3}}$ | 1510.35 | −2213.93 | −8464.09 | −4729.63 | −3775.90 | −1173.71 | 6110.65
BIC | $x^{\alpha}$ | 1026.91 | −4163.71 | −12 797.62 | −4373.75 | −4431.33 | −2476.39 | 5922.93
BIC | ${\text{e}}^{\alpha x}$ | 997.28 | −4053.32 | −12 406.23 | −4484.06 | −4671.92 | −2245.69 | 5944.22
BIC | ${\text{e}}^{\alpha {x}^{2}}$ | 1254.49 | −2932.80 | −9869.66 | −4668.93 | −4320.32 | −1545.48 | 6041.12
BIC | ${\text{e}}^{\alpha {x}^{3}}$ | 1515.61 | −2208.26 | −8457.91 | −4723.44 | −3770.23 | −1168.44 | 6118.22
MSE | $x^{\alpha}$ | 17.942 29 | 2.314 03 | 0.337 12 | 1.068 90 | 4.141 15 | 5.477 80 | 0.001 91
MSE | ${\text{e}}^{\alpha x}$ | 18.053 14 | 2.313 51 | 0.344 74 | 1.115 57 | 4.165 63 | 5.313 65 | 0.009 12
MSE | ${\text{e}}^{\alpha {x}^{2}}$ | 17.159 02 | 2.186 61 | 0.358 62 | 1.160 65 | 3.964 42 | 4.651 49 | 0.042 26
MSE | ${\text{e}}^{\alpha {x}^{3}}$ | 16.057 52 | 2.051 04 | 0.364 10 | 1.149 90 | 3.716 57 | 4.146 21 | 0.068 87

Table S4. Table of statistical fits for the expected quantum annealing runtime for the worst-case dataset in 2D. In bold, we indicate the model preferred by a particular model selection criterion (see section 2).

Parameter | Model | 0%–10% | 10%–25% | 25%–50% | 50%–75% | 75%–90% | 90%–100% | Full
α | $x^{\alpha}$ | 0.031 ± 0.058 | 0.265 ± 0.026 | 0.460 ± 0.010 | 0.795 ± 0.015 | 1.107 ± 0.027 | 1.510 ± 0.060 | 0.672 ± 0.025
α | ${\text{e}}^{\alpha x}$ | 0.009 ± 0.013 | 0.059 ± 0.006 | 0.101 ± 0.002 | 0.176 ± 0.003 | 0.245 ± 0.005 | 0.333 ± 0.013 | 0.149 ± 0.005
α | ${\text{e}}^{\alpha {x}^{2}}$ | 0.005 ± 0.005 | 0.022 ± 0.002 | 0.036 ± 0.002 | 0.064 ± 0.002 | 0.089 ± 0.004 | 0.120 ± 0.008 | 0.054 ± 0.002
α | ${\text{e}}^{\alpha {x}^{3}}$ | 0.002 ± 0.002 | 0.007 ± 0.001 | 0.012 ± 0.001 | 0.021 ± 0.001 | 0.030 ± 0.002 | 0.040 ± 0.003 | 0.018 ± 0.001
AIC | $x^{\alpha}$ | −149.70 | −287.88 | −612.57 | −527.89 | −276.94 | −147.00 | −1240.01
AIC | ${\text{e}}^{\alpha x}$ | −149.96 | −288.86 | −592.28 | −533.08 | −287.01 | −145.83 | −1243.19
AIC | ${\text{e}}^{\alpha {x}^{2}}$ | −150.45 | −281.14 | −491.06 | −409.62 | −213.34 | −112.17 | −1186.99
AIC | ${\text{e}}^{\alpha {x}^{3}}$ | −150.55 | −272.31 | −442.33 | −352.09 | −178.27 | −93.24 | −1129.40
BIC | $x^{\alpha}$ | −148.01 | −285.82 | −610.01 | −525.35 | −274.91 | −145.31 | −1236.06
BIC | ${\text{e}}^{\alpha x}$ | −148.27 | −286.80 | −589.71 | −530.54 | −284.98 | −144.14 | −1239.24
BIC | ${\text{e}}^{\alpha {x}^{2}}$ | −148.76 | −279.08 | −488.49 | −407.08 | −211.31 | −110.48 | −1183.04
BIC | ${\text{e}}^{\alpha {x}^{3}}$ | −148.86 | −270.25 | −439.76 | −349.55 | −176.24 | −91.55 | −1125.45
MSE | $x^{\alpha}$ | 0.282 34 | 0.114 76 | 0.032 90 | 0.012 52 | 0.130 92 | 0.480 34 | 0.002 26
MSE | ${\text{e}}^{\alpha x}$ | 0.272 06 | 0.112 99 | 0.032 75 | 0.011 43 | 0.130 57 | 0.475 86 | 0.000 97
MSE | ${\text{e}}^{\alpha {x}^{2}}$ | 0.260 61 | 0.127 61 | 0.056 64 | 0.035 12 | 0.146 24 | 0.458 02 | 0.025 40
MSE | ${\text{e}}^{\alpha {x}^{3}}$ | 0.261 28 | 0.146 45 | 0.083 73 | 0.062 95 | 0.164 69 | 0.445 68 | 0.054 36

Table S5. Table of statistical fits for the expected quantum annealing runtime for the random dataset in 2D. In bold, we indicate the model preferred by a particular model selection criterion (see section 2).

Parameter | Model | 0%–10% | 10%–25% | 25%–50% | 50%–75% | 75%–90% | 90%–100% | Full
α | $x^{\alpha}$ | 0.197 ± 0.034 | 0.313 ± 0.025 | 0.497 ± 0.017 | 0.728 ± 0.015 | 0.928 ± 0.033 | 1.294 ± 0.065 | 0.642 ± 0.020
α | ${\text{e}}^{\alpha x}$ | 0.046 ± 0.007 | 0.071 ± 0.005 | 0.111 ± 0.004 | 0.161 ± 0.004 | 0.203 ± 0.009 | 0.282 ± 0.017 | 0.142 ± 0.004
α | ${\text{e}}^{\alpha {x}^{2}}$ | 0.018 ± 0.003 | 0.027 ± 0.002 | 0.041 ± 0.002 | 0.058 ± 0.002 | 0.073 ± 0.005 | 0.099 ± 0.009 | 0.052 ± 0.002
α | ${\text{e}}^{\alpha {x}^{3}}$ | 0.006 ± 0.001 | 0.009 ± 0.001 | 0.014 ± 0.001 | 0.020 ± 0.001 | 0.024 ± 0.002 | 0.033 ± 0.004 | 0.017 ± 0.001
AIC | $x^{\alpha}$ | −186.48 | −282.65 | −491.03 | −514.65 | −248.51 | −137.01 | −1357.19
AIC | ${\text{e}}^{\alpha x}$ | −190.18 | −290.24 | −499.15 | −501.84 | −231.11 | −125.93 | −1351.74
AIC | ${\text{e}}^{\alpha {x}^{2}}$ | −193.61 | −289.73 | −456.11 | −402.68 | −186.83 | −98.81 | −1274.09
AIC | ${\text{e}}^{\alpha {x}^{3}}$ | −192.53 | −281.21 | −424.12 | −358.44 | −166.78 | −86.11 | −1212.57
BIC | $x^{\alpha}$ | −184.84 | −280.66 | −488.53 | −512.15 | −246.54 | −135.37 | −1353.30
BIC | ${\text{e}}^{\alpha x}$ | −188.54 | −288.25 | −496.65 | −499.34 | −229.14 | −124.29 | −1347.85
BIC | ${\text{e}}^{\alpha {x}^{2}}$ | −191.97 | −287.74 | −453.61 | −400.18 | −184.86 | −97.17 | −1270.20
BIC | ${\text{e}}^{\alpha {x}^{3}}$ | −190.89 | −279.22 | −421.62 | −355.94 | −164.81 | −84.47 | −1208.68
MSE | $x^{\alpha}$ | 0.142 49 | 0.081 53 | 0.021 62 | 0.011 97 | 0.062 27 | 0.294 82 | 0.007 06
MSE | ${\text{e}}^{\alpha x}$ | 0.136 55 | 0.078 29 | 0.021 71 | 0.013 40 | 0.061 07 | 0.282 77 | 0.008 24
MSE | ${\text{e}}^{\alpha {x}^{2}}$ | 0.139 27 | 0.088 46 | 0.041 72 | 0.035 87 | 0.075 12 | 0.256 62 | 0.030 91
MSE | ${\text{e}}^{\alpha {x}^{3}}$ | 0.148 35 | 0.102 30 | 0.061 92 | 0.057 42 | 0.090 64 | 0.242 76 | 0.052 90

Table S6. Table of statistical fits for the expected quantum annealing runtime for the worst-case dataset in 3D. In bold, we indicate the model preferred by a particular model selection criterion (see section 2).

Parameter | Model | 0%–10% | 10%–25% | 25%–50% | 50%–75% | 75%–90% | 90%–100% | Full
α | $x^{\alpha}$ | 1.241 ± 0.137 | 1.344 ± 0.096 | 1.491 ± 0.055 | 1.773 ± 0.021 | 2.002 ± 0.053 | 2.343 ± 0.171 | 1.676 ± 0.036
α | ${\text{e}}^{\alpha x}$ | 0.329 ± 0.029 | 0.351 ± 0.019 | 0.384 ± 0.010 | 0.445 ± 0.006 | 0.496 ± 0.019 | 0.571 ± 0.052 | 0.424 ± 0.009
α | ${\text{e}}^{\alpha {x}^{2}}$ | 0.191 ± 0.010 | 0.198 ± 0.006 | 0.211 ± 0.003 | 0.231 ± 0.008 | 0.250 ± 0.018 | 0.274 ± 0.039 | 0.224 ± 0.006
α | ${\text{e}}^{\alpha {x}^{3}}$ | 0.099 ± 0.004 | 0.102 ± 0.002 | 0.107 ± 0.002 | 0.113 ± 0.006 | 0.120 ± 0.011 | 0.127 ± 0.023 | 0.111 ± 0.003
AIC | $x^{\alpha}$ | −83.43 | −137.51 | −276.34 | −412.16 | −189.25 | −70.28 | −937.74
AIC | ${\text{e}}^{\alpha x}$ | −93.59 | −156.59 | −324.99 | −391.41 | −158.26 | −59.27 | −961.33
AIC | ${\text{e}}^{\alpha {x}^{2}}$ | −122.89 | −212.15 | −408.62 | −258.19 | −109.09 | −40.07 | −874.59
AIC | ${\text{e}}^{\alpha {x}^{3}}$ | −138.75 | −227.55 | −343.24 | −213.18 | −90.80 | −32.14 | −785.15
BIC | $x^{\alpha}$ | −82.03 | −135.73 | −274.04 | −409.87 | −187.46 | −68.88 | −934.05
BIC | ${\text{e}}^{\alpha x}$ | −92.19 | −154.81 | −322.69 | −389.12 | −156.47 | −57.87 | −957.64
BIC | ${\text{e}}^{\alpha {x}^{2}}$ | −121.49 | −210.37 | −406.31 | −255.90 | −107.31 | −38.67 | −870.91
BIC | ${\text{e}}^{\alpha {x}^{3}}$ | −137.35 | −225.77 | −340.93 | −210.89 | −89.02 | −30.74 | −781.46
MSE | $x^{\alpha}$ | 0.068 99 | 0.044 07 | 0.020 23 | 0.013 00 | 0.044 34 | 0.152 89 | 0.009 75
MSE | ${\text{e}}^{\alpha x}$ | 0.045 70 | 0.026 94 | 0.008 00 | 0.002 21 | 0.025 62 | 0.107 06 | 0.000 02
MSE | ${\text{e}}^{\alpha {x}^{2}}$ | 0.060 41 | 0.052 59 | 0.043 58 | 0.041 14 | 0.050 80 | 0.081 94 | 0.040 43
MSE | ${\text{e}}^{\alpha {x}^{3}}$ | 0.106 48 | 0.102 66 | 0.098 07 | 0.097 14 | 0.101 78 | 0.114 01 | 0.096 81

Table S7. Table of statistical fits for the expected quantum annealing runtime for the random dataset in 3D. In bold, we indicate the model preferred by a particular model selection criterion (see section 2).

Parameter | Model | 0%–10% | 10%–25% | 25%–50% | 50%–75% | 75%–90% | 90%–100% | Full
α | $x^{\alpha}$ | 1.644 ± 0.122 | 1.733 ± 0.091 | 1.816 ± 0.060 | 1.942 ± 0.054 | 2.200 ± 0.035 | 2.815 ± 0.175 | 1.978 ± 0.038
α | ${\text{e}}^{\alpha x}$ | 0.429 ± 0.023 | 0.448 ± 0.016 | 0.469 ± 0.010 | 0.499 ± 0.008 | 0.554 ± 0.007 | 0.688 ± 0.055 | 0.504 ± 0.008
α | ${\text{e}}^{\alpha {x}^{2}}$ | 0.241 ± 0.004 | 0.248 ± 0.002 | 0.258 ± 0.002 | 0.272 ± 0.003 | 0.290 ± 0.012 | 0.334 ± 0.043 | 0.271 ± 0.005
α | ${\text{e}}^{\alpha {x}^{3}}$ | 0.123 ± 0.002 | 0.126 ± 0.002 | 0.130 ± 0.003 | 0.136 ± 0.004 | 0.143 ± 0.009 | 0.156 ± 0.025 | 0.135 ± 0.003
AIC | $x^{\alpha}$ | −90.49 | −142.04 | −256.61 | −272.30 | −221.56 | −69.03 | −903.10
AIC | ${\text{e}}^{\alpha x}$ | −107.31 | −170.90 | −319.80 | −349.58 | −236.72 | −55.79 | −975.92
AIC | ${\text{e}}^{\alpha {x}^{2}}$ | −170.86 | −283.27 | −491.91 | −395.21 | −142.57 | −33.77 | −920.09
AIC | ${\text{e}}^{\alpha {x}^{3}}$ | −166.46 | −230.03 | −325.45 | −283.26 | −112.39 | −24.94 | −803.86
BIC | $x^{\alpha}$ | −89.09 | −140.26 | −254.34 | −270.02 | −219.80 | −67.63 | −899.42
BIC | ${\text{e}}^{\alpha x}$ | −105.91 | −169.11 | −317.52 | −347.30 | −234.96 | −54.39 | −972.24
BIC | ${\text{e}}^{\alpha {x}^{2}}$ | −169.46 | −281.48 | −489.63 | −392.93 | −140.81 | −32.37 | −916.41
BIC | ${\text{e}}^{\alpha {x}^{3}}$ | −165.05 | −228.24 | −323.18 | −280.98 | −110.63 | −23.54 | −800.19
MSE | $x^{\alpha}$ | 0.070 51 | 0.054 30 | 0.043 81 | 0.036 17 | 0.052 08 | 0.260 86 | 0.035 83
MSE | ${\text{e}}^{\alpha x}$ | 0.034 39 | 0.021 62 | 0.012 38 | 0.006 23 | 0.018 76 | 0.175 19 | 0.006 09
MSE | ${\text{e}}^{\alpha {x}^{2}}$ | 0.043 46 | 0.036 99 | 0.031 32 | 0.028 34 | 0.034 76 | 0.095 25 | 0.028 34
MSE | ${\text{e}}^{\alpha {x}^{3}}$ | 0.100 02 | 0.096 02 | 0.092 51 | 0.091 05 | 0.094 99 | 0.119 41 | 0.090 93

Table S8. Table of statistical fits for the expected number of energy evaluations in classical simulated annealing for the worst-case dataset in 2D. In bold, we indicate the model preferred by a particular model selection criterion (see section 2).

Parameter | Model | 0%–10% | 10%–25% | 25%–50% | 50%–75% | 75%–90% | 90%–100% | Full
α | $x^{\alpha}$ | 0.342 ± 0.024 | 0.370 ± 0.022 | 0.406 ± 0.017 | 0.441 ± 0.019 | 0.474 ± 0.027 | 0.582 ± 0.043 | 0.431 ± 0.010
α | ${\text{e}}^{\alpha x}$ | 0.079 ± 0.004 | 0.086 ± 0.003 | 0.094 ± 0.003 | 0.102 ± 0.003 | 0.110 ± 0.004 | 0.134 ± 0.007 | 0.100 ± 0.002
α | ${\text{e}}^{\alpha {x}^{2}}$ | 0.031 ± 0.000 | 0.034 ± 0.000 | 0.037 ± 0.000 | 0.040 ± 0.000 | 0.043 ± 0.000 | 0.053 ± 0.001 | 0.039 ± 0.000
α | ${\text{e}}^{\alpha {x}^{3}}$ | 0.011 ± 0.000 | 0.012 ± 0.000 | 0.013 ± 0.000 | 0.014 ± 0.000 | 0.015 ± 0.000 | 0.019 ± 0.000 | 0.014 ± 0.000
AIC | $x^{\alpha}$ | −221.09 | −302.58 | −502.23 | −484.59 | −281.00 | −174.66 | −1909.64
AIC | ${\text{e}}^{\alpha x}$ | −249.38 | −343.60 | −575.55 | −558.84 | −325.25 | −199.98 | −2139.47
AIC | ${\text{e}}^{\alpha {x}^{2}}$ | −368.49 | −553.71 | −956.21 | −866.85 | −525.90 | −270.71 | −2608.48
AIC | ${\text{e}}^{\alpha {x}^{3}}$ | −320.65 | −436.13 | −692.79 | −647.66 | −380.99 | −253.89 | −2428.29
BIC | $x^{\alpha}$ | −219.40 | −300.56 | −499.68 | −482.06 | −278.97 | −172.97 | −1905.70
BIC | ${\text{e}}^{\alpha x}$ | −247.69 | −341.57 | −573.00 | −556.31 | −323.22 | −198.29 | −2135.53
BIC | ${\text{e}}^{\alpha {x}^{2}}$ | −366.80 | −551.68 | −953.67 | −864.31 | −523.87 | −269.02 | −2604.55
BIC | ${\text{e}}^{\alpha {x}^{3}}$ | −318.96 | −434.11 | −690.25 | −645.13 | −378.96 | −252.20 | −2424.35
MSE | $x^{\alpha}$ | 0.027 63 | 0.024 68 | 0.022 33 | 0.021 79 | 0.022 82 | 0.036 39 | 0.021 77
MSE | ${\text{e}}^{\alpha x}$ | 0.016 33 | 0.012 86 | 0.010 63 | 0.010 07 | 0.011 42 | 0.026 44 | 0.010 05
MSE | ${\text{e}}^{\alpha {x}^{2}}$ | 0.006 62 | 0.002 64 | 0.000 75 | 0.000 16 | 0.001 90 | 0.018 57 | 0.000 13
MSE | ${\text{e}}^{\alpha {x}^{3}}$ | 0.008 83 | 0.004 92 | 0.003 25 | 0.002 66 | 0.004 40 | 0.021 36 | 0.002 64

Table S9. Table of statistical fits for the expected number of energy evaluations in classical simulated annealing for the random dataset in 2D. In bold, we indicate the model preferred by a particular model selection criterion (see section 2).

Parameter | Model | 0%–10% | 10%–25% | 25%–50% | 50%–75% | 75%–90% | 90%–100% | Full
α | $x^{\alpha}$ | 0.291 ± 0.022 | 0.330 ± 0.020 | 0.366 ± 0.017 | 0.404 ± 0.019 | 0.450 ± 0.027 | 0.585 ± 0.046 | 0.397 ± 0.010
α | ${\text{e}}^{\alpha x}$ | 0.068 ± 0.003 | 0.077 ± 0.003 | 0.085 ± 0.003 | 0.094 ± 0.003 | 0.105 ± 0.004 | 0.136 ± 0.008 | 0.092 ± 0.002
α | ${\text{e}}^{\alpha {x}^{2}}$ | 0.027 ± 0.000 | 0.030 ± 0.000 | 0.034 ± 0.000 | 0.037 ± 0.000 | 0.041 ± 0.000 | 0.054 ± 0.001 | 0.036 ± 0.000
α | ${\text{e}}^{\alpha {x}^{3}}$ | 0.009 ± 0.000 | 0.011 ± 0.000 | 0.012 ± 0.000 | 0.013 ± 0.000 | 0.014 ± 0.000 | 0.019 ± 0.001 | 0.013 ± 0.000
AIC | $x^{\alpha}$ | −220.91 | −308.49 | −481.97 | −467.88 | −261.99 | −163.71 | −1819.85
AIC | ${\text{e}}^{\alpha x}$ | −247.33 | −347.44 | −544.89 | −532.60 | −299.81 | −186.41 | −2004.83
AIC | ${\text{e}}^{\alpha {x}^{2}}$ | −349.17 | −530.83 | −832.72 | −839.80 | −481.49 | −239.03 | −2348.93
AIC | ${\text{e}}^{\alpha {x}^{3}}$ | −312.87 | −457.07 | −720.89 | −688.38 | −383.29 | −225.58 | −2278.73
BIC | $x^{\alpha}$ | −219.27 | −306.49 | −479.48 | −465.39 | −260.04 | −162.07 | −1815.96
BIC | ${\text{e}}^{\alpha x}$ | −245.69 | −345.43 | −542.41 | −530.11 | −297.86 | −184.77 | −2000.94
BIC | ${\text{e}}^{\alpha {x}^{2}}$ | −347.53 | −528.83 | −830.23 | −837.31 | −479.54 | −237.39 | −2345.04
BIC | ${\text{e}}^{\alpha {x}^{3}}$ | −311.24 | −455.06 | −718.40 | −685.89 | −381.34 | −223.94 | −2274.85
MSE | $x^{\alpha}$ | 0.028 63 | 0.023 82 | 0.021 23 | 0.020 35 | 0.021 97 | 0.043 34 | 0.020 36
MSE | ${\text{e}}^{\alpha x}$ | 0.018 60 | 0.013 47 | 0.010 70 | 0.009 84 | 0.011 78 | 0.036 07 | 0.009 83
MSE | ${\text{e}}^{\alpha {x}^{2}}$ | 0.009 37 | 0.003 95 | 0.001 04 | 0.000 24 | 0.002 62 | 0.029 61 | 0.000 22
MSE | ${\text{e}}^{\alpha {x}^{3}}$ | 0.010 56 | 0.005 20 | 0.002 34 | 0.001 58 | 0.004 03 | 0.030 65 | 0.001 56

Table S10. Table of statistical fits for the expected number of energy evaluations in classical simulated annealing for the worst-case dataset in 3D. In bold, we indicate the model preferred by a particular model selection criterion (see section 2).

Parameter | Model | 0%–10% | 10%–25% | 25%–50% | 50%–75% | 75%–90% | 90%–100% | Full
α | $x^{\alpha}$ | 0.331 ± 0.024 | 0.359 ± 0.020 | 0.395 ± 0.017 | 0.439 ± 0.019 | 0.464 ± 0.026 | 0.493 ± 0.032 | 0.414 ± 0.009
α | ${\text{e}}^{\alpha x}$ | 0.086 ± 0.004 | 0.093 ± 0.004 | 0.103 ± 0.003 | 0.114 ± 0.004 | 0.121 ± 0.005 | 0.128 ± 0.006 | 0.108 ± 0.002
α | ${\text{e}}^{\alpha {x}^{2}}$ | 0.048 ± 0.001 | 0.052 ± 0.000 | 0.057 ± 0.000 | 0.064 ± 0.000 | 0.068 ± 0.000 | 0.071 ± 0.000 | 0.060 ± 0.000
α | ${\text{e}}^{\alpha {x}^{3}}$ | 0.025 ± 0.000 | 0.027 ± 0.000 | 0.029 ± 0.000 | 0.032 ± 0.000 | 0.034 ± 0.000 | 0.036 ± 0.001 | 0.031 ± 0.000
AIC | $x^{\alpha}$ | −189.03 | −266.65 | −437.52 | −414.11 | −250.89 | −164.34 | −1697.35
AIC | ${\text{e}}^{\alpha x}$ | −207.15 | −294.11 | −483.51 | −459.26 | −278.90 | −184.10 | −1856.48
AIC | ${\text{e}}^{\alpha {x}^{2}}$ | −295.05 | −454.12 | −717.37 | −714.97 | −444.30 | −310.68 | −2316.48
AIC | ${\text{e}}^{\alpha {x}^{3}}$ | −276.28 | −384.31 | −622.73 | −613.55 | −373.74 | −230.00 | −2217.73
BIC | $x^{\alpha}$ | −187.63 | −264.89 | −435.24 | −411.84 | −249.11 | −162.98 | −1693.68
BIC | ${\text{e}}^{\alpha x}$ | −205.74 | −292.35 | −481.23 | −457.00 | −277.12 | −182.73 | −1852.81
BIC | ${\text{e}}^{\alpha {x}^{2}}$ | −293.65 | −452.36 | −715.09 | −712.71 | −442.52 | −309.31 | −2312.81
BIC | ${\text{e}}^{\alpha {x}^{3}}$ | −274.88 | −382.54 | −620.45 | −611.29 | −371.95 | −228.64 | −2214.06
MSE | $x^{\alpha}$ | 0.009 43 | 0.008 24 | 0.007 40 | 0.007 52 | 0.008 13 | 0.009 33 | 0.007 30
MSE | ${\text{e}}^{\alpha x}$ | 0.006 07 | 0.004 84 | 0.003 96 | 0.004 05 | 0.004 73 | 0.005 95 | 0.003 84
MSE | ${\text{e}}^{\alpha {x}^{2}}$ | 0.002 39 | 0.001 13 | 0.000 22 | 0.000 27 | 0.001 02 | 0.002 21 | 0.000 07
MSE | ${\text{e}}^{\alpha {x}^{3}}$ | 0.002 74 | 0.001 52 | 0.000 62 | 0.000 66 | 0.001 42 | 0.002 53 | 0.000 46

Table S11. Table of statistical fits for the expected number of energy evaluations in classical simulated annealing for the random dataset in 3D. In bold, we indicate the model preferred by a particular model selection criterion (see section 2).

Parameter | Model | 0%–10% | 10%–25% | 25%–50% | 50%–75% | 75%–90% | 90%–100% | Full
α | $x^{\alpha}$ | 0.307 ± 0.022 | 0.350 ± 0.020 | 0.393 ± 0.018 | 0.434 ± 0.018 | 0.471 ± 0.025 | 0.511 ± 0.031 | 0.412 ± 0.009
α | ${\text{e}}^{\alpha x}$ | 0.080 ± 0.004 | 0.091 ± 0.004 | 0.102 ± 0.003 | 0.113 ± 0.003 | 0.122 ± 0.005 | 0.132 ± 0.005 | 0.107 ± 0.002
α | ${\text{e}}^{\alpha {x}^{2}}$ | 0.045 ± 0.001 | 0.051 ± 0.000 | 0.057 ± 0.000 | 0.063 ± 0.000 | 0.068 ± 0.000 | 0.074 ± 0.000 | 0.060 ± 0.001
α | ${\text{e}}^{\alpha {x}^{3}}$ | 0.023 ± 0.000 | 0.026 ± 0.000 | 0.029 ± 0.000 | 0.032 ± 0.000 | 0.035 ± 0.001 | 0.037 ± 0.001 | 0.030 ± 0.000
AIC | $x^{\alpha}$ | −193.22 | −279.36 | −438.15 | −432.59 | −254.52 | −172.20 | −1732.82
AIC | ${\text{e}}^{\alpha x}$ | −211.13 | −306.42 | −482.39 | −480.03 | −284.26 | −193.96 | −1887.22
AIC | ${\text{e}}^{\alpha {x}^{2}}$ | −293.47 | −436.46 | −705.67 | −746.36 | −458.90 | −326.59 | −2286.23
AIC | ${\text{e}}^{\alpha {x}^{3}}$ | −278.35 | −410.57 | −663.73 | −621.05 | −354.01 | −228.19 | −2204.57
BIC | $x^{\alpha}$ | −191.81 | −277.55 | −435.86 | −430.30 | −252.74 | −170.80 | −1729.13
BIC | ${\text{e}}^{\alpha x}$ | −209.72 | −304.61 | −480.10 | −477.74 | −282.48 | −192.56 | −1883.53
BIC | ${\text{e}}^{\alpha {x}^{2}}$ | −292.07 | −434.65 | −703.38 | −744.07 | −457.12 | −325.18 | −2282.54
BIC | ${\text{e}}^{\alpha {x}^{3}}$ | −276.95 | −408.77 | −661.44 | −618.76 | −352.23 | −226.79 | −2200.88
MSE | $x^{\alpha}$ | 0.010 53 | 0.008 30 | 0.007 23 | 0.007 32 | 0.008 30 | 0.010 37 | 0.007 14
MSE | ${\text{e}}^{\alpha x}$ | 0.007 31 | 0.004 95 | 0.003 84 | 0.003 91 | 0.004 95 | 0.007 05 | 0.003 74
MSE | ${\text{e}}^{\alpha {x}^{2}}$ | 0.003 78 | 0.001 30 | 0.000 17 | 0.000 21 | 0.001 28 | 0.003 33 | 0.000 06
MSE | ${\text{e}}^{\alpha {x}^{3}}$ | 0.004 13 | 0.001 68 | 0.000 57 | 0.000 62 | 0.001 65 | 0.003 59 | 0.000 47

Footnotes

  • In this section and elsewhere, we fit the data to the length of the protein problem, which we think is the most informative metric from a biological standpoint. We note that there is a linear relation between this parameter and the number of variables in the PUBO (see table S1), hence the goodness-of-fit estimations are unchanged, and the parameters resulting from the fit can be transformed trivially. For instance, under the 2D relation of table S1 an exponential scaling ${\text{e}}^{\alpha N}$ in the protein length N becomes ${\text{e}}^{{\alpha }^{\prime }n}$ in the number of variables n, with ${\alpha }^{\prime }=\alpha /2$ up to a constant prefactor.

  • Here we refer to tailored trajectories employing the cubic root function in the R-score (see section 2), which are the best performing in our benchmark.
