Molecular machine learning with conformer ensembles

Virtual screening can accelerate drug discovery by identifying promising candidates for experimental evaluation. Machine learning is a powerful method for screening, as it can learn complex structure–property relationships from experimental data and make rapid predictions over virtual libraries. Molecules inherently exist as a three-dimensional ensemble and their biological action typically occurs through supramolecular recognition. However, most deep learning approaches to molecular property prediction use a 2D graph representation as input, and in some cases a single 3D conformation. Here we investigate how the 3D information of multiple conformers, traditionally known as 4D information in the cheminformatics community, can improve molecular property prediction in deep learning models. We introduce multiple deep learning models that expand upon key architectures such as ChemProp and SchNet, adding elements such as multiple-conformer inputs and conformer attention. We then benchmark the performance trade-offs of these models on 2D, 3D and 4D representations in the prediction of drug activity using a large training set of geometrically resolved molecules. The new architectures perform significantly better than 2D models, but their performance is often just as strong with a single conformer as with many. We also find that 4D deep learning models learn interpretable attention weights for each conformer.


Introduction
Drug development is a long and costly process. Bringing a new drug to market takes an average of seven years [1] and costs $2.9 billion (USD, 2013) [2]. Because so many drugs fail in late-stage trials, it is critical to generate a variety of leads to increase the chance of success. Leads are often discovered by screening large chemical libraries, but these libraries are expensive to manage, and their chemistry is both homogeneous and poorly reflective of actual drug chemistry [3]. Computational screening can improve this process by exploring a much larger space of compounds and identifying the top candidates for experimental testing. Such methods range from physics-based simulations, such as computational docking [4,5] and molecular dynamics [6], to data-driven regressions, such as machine learning (ML) [7][8][9]. Combinations of both have also produced fruitful results [10]. While docking accuracy is limited by the scoring function and force field [4], the accuracy of ML methods is limited only by the amount of available data, and ML inference is orders of magnitude faster.
Given enough data, neural networks used in ML can in principle learn any function, including the mapping from molecule to property. However, there is typically a scarcity of data for successful drugs, as the vast majority of tested molecules do not bind the target protein. It is therefore necessary to optimize the neural network architecture to best leverage the limited data. Advances in deep learning have improved ML performance by training networks directly on molecular graphs [11][12][13][14][15][16][17][18][19][20]. These methods use graphs to generate learnable, continuous fingerprints, which are then used as input to a neural network. This representation encodes molecular similarity, as fingerprints predicted to have similar properties are themselves similar.
The state of the art uses 2D molecular graphs [21] (or in some cases a single 3D molecular structure [11]) to generate these fingerprints. One way to further improve the representation is to use 3D ensemble information. A molecule is neither a 2D structure nor a single 3D structure, but rather an ensemble of continuously inter-converting 3D structures. The process of drug binding is a 3D recognition event between the drug and the binding pocket, and so depends critically on the 3D structures accessible to the molecule. This is not explicitly represented in either a 2D representation or in a single 3D structure. However, this information may still be learned implicitly, as the set of conformers is in principle a function of the 2D graph only. The extent to which 2D models can implicitly learn this information, and thus match the performance of 3D models, is not currently understood.
Here we investigate whether 3D information of one or more conformers can improve ML drug screening (Fig. 1). As an example we screen molecules that can inhibit the novel SARS coronavirus 2 (SARS-CoV-2), which causes COVID-19 [22]. The COVID-19 pandemic has killed millions worldwide [23], and its severity means that developing antivirals is of critical importance. We find that models based on a single conformer can better identify therapeutics than models based on 2D graphs. However, using multiple conformers does not further improve the results. We also test a transfer learning strategy to leverage the large amount of data for SARS-CoV, which causes SARS, to better predict SARS-CoV-2 inhibition. We find that transfer learning with 3D models is better than training 2D models from scratch, but that the improvement is small. We use these results to identify the best 3D model, and apply it to a set of 298,000 molecules that have not been tested experimentally. These results are then used to rank top candidates for experimental testing.

Neural network architectures
Machine learning models are trained to map an input, such as a molecule, to an output, such as binding activity. Unlike other data types, however, molecules do not have an obvious numerical representation. Various representations have been proposed, such as extended connectivity fingerprints [24][25][26] and bag-of-bonds [27] in 2D, and the Coulomb matrix, Behler-Parrinello symmetry functions [28,29], and SOAP kernels [11,30] in 3D. The current state of the art in deep learning uses message-passing neural networks [11] to learn molecular representations directly from 2D or 3D molecular graphs [12][13][14][15][16][17][18][19][20]. To our knowledge, however, no models have been developed that learn from conformer ensembles. Here we discuss and extend various 2D and 3D message-passing architectures, and describe a method for applying them to conformer ensembles.

Message passing
Molecules can be represented as graphs. These graphs consist of a set of nodes (atoms) connected to each other by a set of edges. Both the nodes and edges have features. The atoms, for example, can be characterized by their atomic number and partial charge. The edges can be characterized by bond type or by interatomic distance. In the message passing phase, message-passing neural networks (MPNNs) aggregate node and edge features to create a learned fingerprint. The readout phase uses the fingerprint as input to a regressor that predicts a property [21].
The message passing phase consists of T steps, or convolutions. In what follows, superscripts denote the convolution number. The node features of the v-th node are x_v, and the edge features between nodes v and w are e_vw. The atom features x_v are first mapped to so-called hidden states h_v^0. A message m_v^{t+1} is created in the t-th convolution, which combines h_v and h_w for each pair of nodes v and w with edge features e_vw [21,31]:

m_v^{t+1} = Σ_{w ∈ N(v)} M_t(h_v^t, h_w^t, e_vw),

where N(v) is the set of neighbors of v in graph G, and M_t is a message function. The hidden states are updated using a vertex update function U_t:

h_v^{t+1} = U_t(h_v^t, m_v^{t+1}).

The readout phase then uses a function R to map the final hidden states to a property y, through

ŷ = R({h_v^T | v ∈ G}).

In this work we use the directed message-passing ChemProp model [21], which achieves state-of-the-art performance on a wide range of prediction tasks [21]. In this implementation, hidden states h_vw^t and messages m_vw^t are used, rather than node-based states h_v^t and messages m_v^t. Hidden states are initialized with

h_vw^0 = τ(W_i z_vw),

where z_vw = x_v ⊕ e_vw is the concatenation of the atom features x_v for atom v and the bond features e_vw for bond vw, W_i is a learned matrix, and τ is an activation function. The message passing function is simply M_t(x_v, x_w, h_vw^t) = h_vw^t, so that each message is

m_vw^{t+1} = Σ_{k ∈ N(v)\{w}} h_kv^t.

The edge update function is the same neural network at each step:

h_vw^{t+1} = U_t(h_vw^t, m_vw^{t+1}) = τ(h_vw^0 + W_m m_vw^{t+1}),

where W_m ∈ R^{F×F} is a learned matrix. This message-passing phase is repeated for t ∈ 1, ..., T. After the final convolution, the atom representation of the molecule is recovered by aggregating the incoming bond states for each atom. The hidden states are then summed to give a feature vector for the molecule: h = Σ_{v ∈ G} h_v. Properties are predicted through ŷ = f(h), where f is a feed-forward neural network. In ChemProp the atom features are atom type, number of bonds, formal charge, chirality, number of bonded hydrogen atoms, hybridization, aromaticity, and atomic mass. The bond features are the bond type (single, double, triple, or aromatic), whether the bond is conjugated, whether it is part of a ring, and whether it contains stereochemistry (none, any, E/Z or cis/trans). All features are one-hot encodings. The ChemProp code was accessed through [32].
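The directed message-passing scheme above can be sketched in a few lines. The following is an illustrative NumPy toy, not the ChemProp implementation: the weights are random, the graph is a hypothetical three-atom chain, and the atom-recovery step is simplified to a sum over incoming bond states.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def directed_mpnn(x, e, bonds, W_i, W_m, T=2):
    """Toy directed message passing in the spirit of ChemProp.

    x: (n_atoms, d_atom) atom features; e: {(v, w): bond feature vector};
    bonds: directed bonds, with both (v, w) and (w, v) present.
    """
    F = W_m.shape[0]
    # h_vw^0 = tau(W_i [x_v ; e_vw])
    h0 = {vw: relu(W_i @ np.concatenate([x[vw[0]], e[vw]])) for vw in bonds}
    h = dict(h0)
    for _ in range(T):
        m = {}
        for (v, w) in bonds:
            # m_vw^{t+1}: sum over bond states h_kv entering v, excluding k = w
            m[(v, w)] = sum((h[(k, u)] for (k, u) in bonds if u == v and k != w),
                            np.zeros(F))
        # h_vw^{t+1} = tau(h_vw^0 + W_m m_vw^{t+1})
        h = {vw: relu(h0[vw] + W_m @ m[vw]) for vw in bonds}
    # simplified atom recovery: sum incoming bond states, then pool over atoms
    h_atom = np.zeros((x.shape[0], F))
    for (v, w) in bonds:
        h_atom[w] += h[(v, w)]
    return h_atom.sum(axis=0)  # molecular fingerprint h

# toy three-atom chain: 0 - 1 - 2
rng = np.random.default_rng(0)
bonds = [(0, 1), (1, 0), (1, 2), (2, 1)]
x = rng.normal(size=(3, 4))                 # 3 atoms, 4 atom features
e = {b: rng.normal(size=3) for b in bonds}  # 3 bond features per bond
W_i = 0.1 * rng.normal(size=(8, 7))         # hidden size F = 8
W_m = 0.1 * rng.normal(size=(8, 8))
fp = directed_mpnn(x, e, bonds, W_i, W_m)
```

The directed exclusion of the reverse bond (k ≠ w) is what prevents messages from immediately bouncing back along the edge they arrived on.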

Learning with 3D features
A variety of graph convolutional models have been proposed for learning force fields, which map a set of 3D atomic positions of a molecular entity to an energy. Architectures designed for force fields typically do not incorporate information about the covalent connectivity [17,18,29,33], since these bonds are broken and formed during chemical reactions and may not be clearly defined. This differs from architectures for property prediction, which are typically based on 2D graphs [12,21,34] but can also use 3D information [31].
Here we combine graph-based models with 3D models in a number of ways. Our first such model is called SchNetFeatures, as it uses the SchNet force field architecture [17,18] (code adapted from [35]), but adds additional graph-based features based on the covalent connectivity. Our second model is called ChemProp3D, as it uses the ChemProp property prediction architecture, but adds additional 3D (distance-based) edge features between atoms within a distance threshold. We have also explored a reduced ChemProp3D model, in which bond states are updated only based on other covalently-bonded neighbors, and not on non-bonded neighbors. In this case the updated bond features are simply concatenated with the original distance features at each step. This model is called CP3D-NDU, for ChemProp3D with no distance updates. We also abbreviate this as CND.
In the typical SchNet model the feature vector of each atom is initialized with an embedding function. This embedding generates a random vector that is unique to every atom with a given atomic number, and is also learnable. The edge features at each step t are generated through a so-called filter network V_t. The filter network generates an edge vector e_vw by expanding the distance ||r_v − r_w|| in a basis of Gaussian functions. The expansion coefficients are then transformed into e_vw through linear and non-linear operations. Because only the distance between two atoms is used to create e_vw, the features produced are invariant to rotations and translations.
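The Gaussian basis expansion at the heart of the filter network can be written compactly. The sketch below is illustrative: centers are evenly spaced from 0 to r_cut and widths are tied to the spacing, though both are hyperparameters in practice.

```python
import numpy as np

def gaussian_expansion(d, n_gaussians=10, r_cut=5.0):
    """Expand interatomic distances in a basis of Gaussian functions,
    as in a SchNet-style filter network. Returns (..., n_gaussians)."""
    centers = np.linspace(0.0, r_cut, n_gaussians)  # evenly spaced centers
    width = centers[1] - centers[0]                 # width tied to spacing
    d = np.asarray(d)[..., None]
    return np.exp(-((d - centers) ** 2) / (2 * width ** 2))

# two distances (in Angstrom) expanded into 10 features each
feats = gaussian_expansion([1.5, 3.2], n_gaussians=10, r_cut=5.0)
```

Each distance becomes a smooth one-hot-like vector peaked at the nearest center, which downstream linear layers can transform into edge features.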
In each convolution t + 1, the new messages and hidden vectors are given by

m_v^{t+1} = Σ_{w ∈ N(v)} h_w^t ∘ V_t(e_vw), h_v^{t+1} = h_v^t + I_t(m_v^{t+1}).

Here, ∘ denotes element-wise multiplication and I_t denotes the so-called interaction block. The interaction block consists of a set of linear and non-linear operations applied atom-wise to the atomic features. In the full architecture these operations are applied both before and after multiplication with V_t.
For SchNetFeatures we replace the initial atomic feature embedding with the graph-based atom features described above for ChemProp. We also combine distance features e_vw^dist with bond features e_vw^bond to create the e_vw. Here the e_vw^dist are the edge features created by applying the SchNet filter network to the distances. The e_vw^bond are hidden bond vectors, obtained from a nonlinear operation applied to the bond features; they are set to 0 for non-bonded atom pairs. In the original SchNet implementation the readout layer converted each atomic feature vector into a single number, and the numbers were summed to give an energy. Consistent with the notion of property prediction, we here instead convert the node features into a molecular fingerprint by adding the features of each node. The readout function is then applied to this fingerprint.
The usual ChemProp model creates and updates edge embeddings for each bond. For ChemProp3D we create an edge embedding for each pair of atoms separated by less than r_cut. In particular, we use one initialization matrix for the distances, W_i^dist, and one for the bond features, W_i^bond, and initialize the hidden states by applying these two matrices to the distance and bond features, respectively. The distance features e_vw^dist are the result of a SchNet filter network applied to the distances r_vw. The bond features for non-bonded pairs are again set to 0, and the remainder of the ChemProp architecture is unchanged.
For all 3D models we use 10 Gaussian functions for each distance and a cutoff distance of 5 Å. We use three convolutions for SchNetFeatures, but two convolutions for the ChemProp3D models because of their higher memory costs. The SchNet force field uses Gaussians spaced 0.1 Å apart [17,18], which is necessary to resolve movements that lead to appreciable changes in energy. Here we use a much larger spacing (0.3-0.5 Å) because we are interested in larger-scale features of the molecule. Resolving the structure at a finer scale would not be expected to improve performance, and could more easily lead to overfitting.

Conformer pooling
The above discussion applies to molecules associated with one geometry. However, multiple conformers correspond to a single stereochemical formula and can be considered at the same time, each one having a different statistical weight p^(n). It is not immediately clear how to pool the fingerprints of the different conformers. A simple pooling scheme would be to multiply the fingerprint of each conformer by p^(n) and add the results. However, this assumes that the resultant property is a statistical average of individual conformer properties, and this is not always the case. For example, the contribution of a conformer to the binding process is not determined by its statistical weight. Rather, it is determined by the affinity of the conformer for the target, but this affinity is not known a priori, and large differences in binding energy may compensate for small differences in conformational energy.
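As a concrete example of the simple statistically-weighted scheme, the sketch below computes Boltzmann weights p^(n) from conformer energies and averages the fingerprints. The fingerprints and energies here are made-up placeholders for illustration.

```python
import numpy as np

def boltzmann_pool(fingerprints, energies, kT=0.593):
    """Pool conformer fingerprints with Boltzmann statistical weights.

    energies: relative conformer energies in kcal/mol;
    kT defaults to ~0.593 kcal/mol (room temperature, 298 K).
    """
    e = np.asarray(energies, dtype=float)
    w = np.exp(-(e - e.min()) / kT)      # unnormalized Boltzmann factors
    p = w / w.sum()                      # statistical weights p^(n)
    pooled = (p[:, None] * np.asarray(fingerprints)).sum(axis=0)
    return p, pooled

rng = np.random.default_rng(1)
fps = rng.normal(size=(4, 16))           # 4 conformers, 16-dim fingerprints
p, pooled = boltzmann_pool(fps, [0.0, 0.3, 1.1, 2.5])
```

Low-energy conformers dominate the average; as the text notes, this fixed weighting cannot capture cases where a high-energy conformer is the one that binds.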
An ideal pooling scheme would be adaptable, learning fingerprint weights that are best suited to the task at hand. For this reason we propose a pooling mechanism based on attention [36][37][38][39]. Attention combines a learned vector with two feature vectors to determine the importance of one set of features to another. The resulting coefficients are then normalized to give weights for each feature vector. Similar to other applications of attention [39], we have found that it is useful to include multiple attention heads in the pooling. These different heads can learn different features to focus on in the pooling.
The notion of attention can be applied to conformer fingerprints in one of two ways. The first is to let the attention mechanism learn the importance of one conformer's features to another, giving a set of attention coefficients α_nm for n, m ∈ {1, ..., N}, where N is the number of conformers. The second, which we call linear attention, is to simply learn the conformer's overall importance, rather than its importance with respect to another conformer. This gives a set of coefficients α_n for each conformer. The choice of attention or linear attention is treated as a hyperparameter to be optimized.

Incorporating statistical weights
The attention mechanism should be able to use the statistical weight of each conformer if it improves predictions. However, the fingerprint of the n-th conformer, h^(n), does not take into account p^(n). To add this information we embed the weight as a vector d^(n), through

d^(n) = softmax(D p^(n) + b),

where D ∈ R^{S×1} is a learned matrix, S is the dimension of d^(n), and b is a learned bias. The softmax activation means that d^(n)_k can be interpreted as the projection of p^(n) onto the k-th probability bin. Here we choose S = 10 so that the probabilities are divided into 10 bins. A linear layer is then applied to the concatenation of d^(n) with h^(n), yielding the final conformer fingerprint q^(n):

q^(n) = H (h^(n) ⊕ d^(n)) + b,

where H ∈ R^{F×(F+S)} is a learned matrix, and b is again a learned bias. The new fingerprints q^(n) are aggregated through an attention mechanism to yield the final fingerprint Q.
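This weight embedding can be sketched as follows; the parameters are random placeholders for illustration, and the dimensions (S = 10 bins, F = 16) mirror the description above.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def embed_weight(p_n, h_n, D, b, H, b2):
    """Embed a conformer's statistical weight p_n as a soft assignment
    over S probability bins, then mix it into its fingerprint h_n."""
    d_n = softmax(D[:, 0] * p_n + b)             # (S,), since D is (S, 1)
    q_n = H @ np.concatenate([h_n, d_n]) + b2    # (F,)
    return d_n, q_n

S, F = 10, 16
rng = np.random.default_rng(2)
D = rng.normal(size=(S, 1))
b = rng.normal(size=S)
H = 0.1 * rng.normal(size=(F, F + S))
b2 = np.zeros(F)
d_n, q_n = embed_weight(0.25, rng.normal(size=F), D, b, H, b2)
```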

Computing attention coefficients
For the pairwise attention method we then compute the attention coefficients c_nm between conformers n and m as

c_nm = a • (A q^(n) ⊕ A q^(m)),

where a ∈ R^{2F} is a learned vector, A ∈ R^{F×F} is a learned matrix, and • is the dot product. For ease of notation we have suppressed the dependence on the attention head index. The attention paid by conformer n to conformer m is then

α_nm = exp(c_nm) / Σ_{m'} exp(c_nm').

The pooled fingerprint Q is a weighted sum over each fingerprint followed by an activation function τ, such that

Q = τ((1/N) Σ_{n,m} α_nm q^(m)).

Here N is the number of conformers, and the division by N ensures that Σ_{nm} α_nm / N = 1. For the linear attention mechanism only one conformer is used to determine each coefficient, through

c_n = a • q^(n),

where a ∈ R^F. The coefficients α_n are normalized by summing over one index only, α_n = exp(c_n) / Σ_{n'} exp(c_n'), and the pooled conformers are

Q = τ(Σ_n α_n q^(n)).

Adding a superscript to denote fingerprints from different heads, the final fingerprint resulting from K linear or pair-wise attention heads is then the concatenation Q = Q^(1) ⊕ ... ⊕ Q^(K). A schematic of the fingerprinting architecture is shown in Fig. 2.
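The linear-attention variant is the simpler of the two and can be sketched as below. The head vectors and conformer fingerprints are random placeholders, tanh stands in for the activation τ, and head outputs are concatenated.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def linear_attention_pool(q, heads):
    """Pool N conformer fingerprints q (N, F) with K linear attention
    heads (K, F). Each head scores every conformer with a learned
    vector a, normalizes the scores, and takes a weighted sum."""
    pooled = []
    for a in heads:
        alpha = softmax(q @ a)          # (N,) importance of each conformer
        pooled.append(np.tanh(alpha @ q))
    return np.concatenate(pooled)       # (K * F,) concatenated head outputs

rng = np.random.default_rng(3)
q = rng.normal(size=(5, 16))            # N = 5 conformers, F = 16
heads = rng.normal(size=(2, 16))        # K = 2 heads
Q = linear_attention_pool(q, heads)
```

Because the weights α_n are normalized per head, a head can sharpen onto a single conformer or spread attention evenly, which is what the interpretability analysis below inspects.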

Obtaining conformers
To train a model on 3D ensemble information, one must first reduce the infinite set of 3D structures to a finite set of conformers, the set of thermally accessible structures at local minima on the potential energy surface. However, generating accurate conformers is challenging. A variety of exhaustive, stochastic, and Bayesian methods have been developed to generate conformers [40][41][42][43][44][45][46][47]. The stochastic conformer generation methods used in cheminformatics packages [46] are not exhaustive and may miss low-energy structures, and exhaustive sampling has prohibitive exponential scaling with the number of rotatable bonds. Further, the classical force fields used are generally not accurate enough to assign conformer orderings based on energy [48].
Significant progress in conformer generation has been made with the CREST program [49], which uses semi-empirical quantum chemistry and advanced sampling methods to address these issues. We recently used CREST to create the GEOM dataset [50][51][52], which contains conformers annotated by energy for over 300,000 drug-like molecules and 130,000 combinatorially generated small molecules [53]. We use this dataset to train the 3D conformer-based models described below.

Training
To screen COVID-19 therapeutics, we first compared 2D and 3D approaches for different targets to identify the best model-target pair. In particular we examined experimental data for specific inhibition of the SARS-CoV-2 3CL protease ("CoV-2 3CL") [54], and for general inhibition of SARS-CoV-2, measured in vitro in human cells [55,56]. CoV-2 3CL has 96% sequence similarity to its counterpart CoV 3CL [57], the analogous protein in the virus causing SARS. Because there is 1000 times more data for CoV 3CL than for CoV-2 3CL (Table 1), and because the sequences are so similar, we also explored using transfer learning (TL) from CoV 3CL to CoV-2 3CL.
We used the conformers from the GEOM dataset as inputs to different 3D models (SchNetFeatures, ChemProp3D, and ChemProp3D-NDU). For comparison we also trained models with only the single highest-probability conformer, and models with only graph information (ChemProp and a random forest over circular fingerprints). These networks were first trained on the CoV 3CL task and the two CoV-2 tasks. The CoV 3CL models were then used to generate fixed features for molecules in the CoV-2 3CL dataset for TL.
In all cases we used a 60/20/20 train/validation/test split with species split by scaffold, as described in Ref. [21]. This approach groups molecules with the same scaffolds into the same split. This makes the classification task harder than a random split, as the model cannot simply identify a scaffold from the training set and apply it to the validation and test sets. Because the conformers were generated with CREST, which allows for reactivity like keto-enol tautomerism, some of the species had conformers with different molecular graphs from each other. To avoid this issue we removed all such species, which accounted for about 5% of the total. (It would also be possible to input the 3D information along with the corresponding new molecular connectivity.) This is why there are fewer species in Table 1 than in the original sources, or in the associated datasets found in [58]. The data used are available in [51].
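The scaffold split can be sketched as below. This is an illustrative greedy variant operating on precomputed scaffold strings (in practice Bemis-Murcko scaffolds would be computed with a cheminformatics toolkit); it is not the exact procedure of Ref. [21].

```python
from collections import defaultdict

def scaffold_split(scaffolds, frac_train=0.6, frac_val=0.2):
    """Assign molecule indices to train/val/test so that all molecules
    sharing a scaffold land in the same split. Groups are placed
    greedily, largest first, until each split reaches its quota."""
    groups = defaultdict(list)
    for i, s in enumerate(scaffolds):
        groups[s].append(i)
    ordered = sorted(groups.values(), key=len, reverse=True)
    n = len(scaffolds)
    train, val, test = [], [], []
    for g in ordered:
        if len(train) + len(g) <= frac_train * n:
            train += g
        elif len(val) + len(g) <= frac_val * n:
            val += g
        else:
            test += g
    return train, val, test

# toy example: three scaffold groups of sizes 6, 2, and 2
scaffs = ["c1ccccc1"] * 6 + ["C1CCCCC1"] * 2 + ["c1ccncc1"] * 2
train, val, test = scaffold_split(scaffs)
```

Because entire scaffold groups move together, no scaffold seen in training appears in validation or testing, which is what makes the task harder than a random split.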
Hyperparameter optimization was performed for each model type and each individual task using Bayesian optimization [59]. In all cases we scored models by their best validation scores rather than test scores, to avoid biasing the models toward molecules in the test set. Models were then trained on full datasets. For less intensive models like ChemProp with small amounts of data, uncertainty was determined by training ten different models and calculating the standard deviation in their scores.
The models evaluated on the test set were the models with the best performance on the validation set, according to the metric of interest (e.g., for evaluating the test-set PRC, the model with the best validation PRC was used). This method of uncertainty quantification was not feasible for any of the 3D models, or for ChemProp trained on the CoV 3CL data. In these cases we used two different models from different epochs in the same training progression. Each model had the best validation score according to either the PRC or ROC metric. Averages and standard deviations were then calculated from two scores, one from each of the two models evaluated on the test set. This approach is justified because, in many cases, the model with the best validation PRC/ROC was not the one with the best test PRC/ROC. Note, however, that this method gives zero uncertainty whenever the epoch with the best ROC also has the best PRC. Further details of the training process can be found in the SM, and exact parameters used for each of the different models can be found in [51].

Table 2: Performance of various models on the three classification tasks, evaluated with the ROC and PRC scores. Uncertainty on the last digit is denoted by a subscript, and is given here by the standard deviation of test scores. Uncertainties less than 10^-3 are recorded as zero. Due to computational constraints we did not train ChemProp3D (200 confs) on the CoV 3CL task.

2D and 3D performance
Results are given in Table 2. The 3D models are compared with the 2D-based ChemProp and random forest models, with the random forest trained on Morgan fingerprints. We begin by analyzing performance on CoV-2 3CL, the task with the smallest amount of data, and then proceed to datasets of increasing size (CoV-2 followed by CoV 3CL). The random forest model performs best on the CoV-2 3CL task. It is followed closely by ChemProp and CND with one conformer ("1-C"). The other 3D models struggle: SchNetFeatures and CND (200-C) have mediocre PRC scores, while ChemProp3D has low PRC and ROC scores. It is clear that 3D information does not help with the CoV-2 3CL task.
In fact, it is not even necessary to use an MPNN, as a simple random forest outperforms all neural networks. This task has the smallest amount of data (485 training species), which may explain the results: with so little data, it is difficult for the neural networks to learn meaningful representations. The learned representations are likely no more powerful than the Morgan fingerprints used in the random forest; this is often the case in low-data regimes [60]. This is especially true of the 3D models with distance updates, as they have an abundance of input information with few labels to guide the representation learning. This may make the 3D models especially prone to overfitting. The general CoV-2 task tells a different story: even if the best 3D score were simply equal to that of the next best 3D model, it would still be double that of the 2D scores.
For this task it is clear that 3D information can significantly improve performance.
With the exception of the ChemProp3D models, which have poor ROC scores, all of the 3D models outperform the 2D models. CND (200-C) performs similarly to 1-C, while SchNetFeatures (200-C) improves the PRC by 32%. It is noteworthy that using a single conformer is better than using many. All information contained in a 1-C representation is also available in 200-C, so it seems that the model training is hampered by the extra information. This is particularly intriguing because the attention mechanism learns meaningful information about the conformers (see below), focusing on geometries that are similar to those of other hits. The model may be overfitting to the extra information, despite the hyperparameter optimization to determine reasonable dropout rates.
The best CoV 3CL models are ChemProp, random forest and CND. CND (200-C) is essentially tied with ChemProp for the best ROC score, while random forest has a slightly higher PRC score than CND (1-C). We see again that CND is a robust model that can improve on ChemProp results, but that extra conformers do not improve performance. We also see that the random forest model is competitive with CND. The remaining 3D models are outperformed by 2D models, but perform more comparably to 2D models than they did in the CoV-2 3CL task. This could perhaps be due to the extra training data.

Transfer learning
We also analyzed how TL could improve CoV-2 3CL scores. To do so we used the pre-trained CoV 3CL models to generate fingerprints, and used the fingerprints as input to a standard feed-forward neural network. We did this with and without additional message passing from ChemProp. When additional message passing was used, the fixed fingerprints were concatenated with the learned ChemProp fingerprints. We used pretrained 3D models with the best validation losses, and pretrained 2D models with the best validation score on the target metric. Hyperparameters were again optimized separately for each model. Interestingly, CND is the only model that improves accuracy, despite the fact that it contains the least amount of 3D information and the least amount of conformer attention (see below). TL from other 3D models can even hurt performance, as in the case of SchNetFeatures. However, it is noteworthy that the 3D representations are more robust than the TL 2D representation: all 3D models without message passing have an ROC score above 0.8, while TL ChemProp has a score of 0.65. The benefit of using TL on related targets is thus far from clear; further work is needed to determine its relevance to drug discovery.

Screening untested compounds
On the basis of model performance, we chose the CP3D-NDU (1-C) model to screen molecules for general CoV-2 inhibition. We chose this target over CoV-2 3CL because 2D models perform best on CoV-2 3CL, and these are already available to the community, whereas 3D models perform best on the general CoV-2 task.
To best use the available data we retrained the model using five-fold cross validation. We split the data into five different 80/20 training/validation splits and trained five different sets of models. We did not exclude species whose conformers had different graphs, as we were only using a single conformer. We then selected one model for each of the two validation metrics (ROC and PRC), giving 10 models in total. Each model was applied to the remainder of the GEOM dataset, which consists of 298,538 untested molecules. The score was computed as the average of 10 scores. The molecules ranked by score are available at [51]. The top 10 molecules are shown in Figure 3. The model assigns high scores to a number of thioamides (carbon with a single bond to nitrogen and a double bond to sulfur). For example, the top candidate is a thiourea that has been used to extend the lifetime of eukaryotic cells [61]. Thioamides are also part of the structure of the most common hyperthyroidism drugs [62]. The model also highly ranks a number of molecules with Phe-C-Phe moieties, and molecules with many methoxy groups. Future experimental testing will evaluate the accuracy of these predictions.

Accuracy
Adding 3D information can improve prediction accuracy when using the right model. Of all the models introduced, the new CND model shows the strongest and most consistent performance across tasks, either matching or beating 2D baselines. CND (1-C) in particular shows the most consistent performance of all models, even on the CoV-2 3CL task with little data. However, many 3D models do not improve performance, and some can actually hurt it. For example, ChemProp3D is outperformed by 2D models in every task, while SchNetFeatures improves results in CoV-2 but hurts them in CoV 3CL. It is also clear that adding extra conformers does not improve performance. Given the enormous computational cost of training the 200-C models (see below), it is a far better use of resources to train 1-C models than 200-C models, and in particular the CP3D-NDU (1-C) model. Transfer learning can somewhat improve accuracy, but the effect is small and absent in all but the CND model. Much of the improvement can be matched with a simple 2D baseline like random forest.

Table 4: Average cosine similarity between E3FP fingerprints of different species. A single conformer was selected for each species, either randomly or based on highest attention weight from a SchNetFeatures model with the lowest validation loss. "hit/hit" means similarity among hits, "hit/miss" means similarity between hits and misses, and ∆ = (hit/hit) − (hit/miss). The highest ∆ scores for each task are shown in bold. 5,000 comparisons were made for each category. Uncertainty on the last digit is denoted by a subscript and given by the standard error of the mean.

Conformer importance
Given the struggles of conformer-based models, it is important to understand how conformers are pooled to see if the models are learning meaningful information from them. To this end we analyzed the similarity of high-attention conformers in different species. For each comparison we randomly selected two species, and for each species we selected the conformer with the highest attention weight among all attention heads. We then computed the fingerprints of the two conformers using the E3FP method [63], which is an extension of the extended connectivity fingerprint to 3D structures. The cosine similarity metric was then applied to the fingerprints. This was repeated for 5,000 random pairs in the test set to obtain the average similarity. We applied this analysis separately to pairs of species that were both hits, and to pairs of species that contained one hit and one miss. We also repeated the analysis with a random selection of conformers for comparison.
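The similarity analysis can be sketched as below, with synthetic fingerprints standing in for E3FP vectors: hits are drawn around a shared base vector so that hit/hit pairs come out more similar than hit/miss pairs, mimicking the structure the table reports.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def mean_similarity(fps_a, fps_b, n_pairs, rng):
    """Average cosine similarity over randomly drawn fingerprint pairs,
    one from each set (e.g. hit/hit or hit/miss comparisons)."""
    sims = []
    for _ in range(n_pairs):
        u = fps_a[rng.integers(len(fps_a))]
        v = fps_b[rng.integers(len(fps_b))]
        sims.append(cosine(u, v))
    return float(np.mean(sims))

rng = np.random.default_rng(4)
base = rng.normal(size=64)                     # shared "hit-like" direction
hits = base + 0.5 * rng.normal(size=(20, 64))  # hits cluster around it
misses = rng.normal(size=(20, 64))             # misses are unstructured
hit_hit = mean_similarity(hits, hits, 500, rng)
hit_miss = mean_similarity(hits, misses, 500, rng)
delta = hit_hit - hit_miss                     # the Delta of Table 4
```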
Results of this analysis are shown in Table 4. Random selection of conformers yields hits that are more similar to each other than to misses. Choosing conformers based on attention increases the similarity of hits to each other, while leaving the similarity of hits to misses virtually unchanged. The difference between hit/hit and hit/miss scores, denoted by ∆, is on average 40% larger for conformers chosen by attention than for conformers chosen randomly. This means that it is easier to distinguish hits from misses using attention conformers than using random conformers. Interestingly, unlike the other models, CND (200-C) did not learn meaningful attention weights for the CoV 3CL task (it assigned equal weights to nearly every conformer), but generated the most useful fingerprints for the transfer learning task. This may be because it contains the most 3D information without biasing the conformers toward a different protein, even if that protein has high sequence similarity to the target.

Computational cost
The conformer-based 3D models are quite computationally demanding. 3D models are generally more expensive than 2D models, as they incorporate non-bonded edges in addition to bonded ones. More importantly, conformer-based models are n_conf times more expensive than single-conformer models, where n_conf is the average number of conformers. Here we used a maximum of 200 conformers per species, which corresponds to 76 on average. Hence our models are hundreds of times slower than 2D models, and our training on the CoV 3CL dataset took several days on 32 GPUs.
It is not beneficial to use conformer models in their current state, as they are expensive and can hurt performance. However, given the sheer volume of extra information contained in conformers, it is worth considering other ways to use the ensemble. One direction is to reduce model complexity, which would lower computational cost and could also improve performance. A simple method is to bin similar conformers together, reducing the number of effective conformers; the statistical weight of each bin would be the sum of its constituent weights. One might also imagine working with a single effective conformer: after aligning the conformers, each edge feature would be given by a weighted sum of the corresponding edge features of every conformer, with weights that are either statistical probabilities or learned attention weights. Alternatively, one could first determine the conformer of each species that is most likely to bind, and then train a model on that single conformer only. This could perhaps be done with docking, or with Monte Carlo simulations to maximize conformer similarity among hits.
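The binning idea can be sketched minimally as follows, with hypothetical one-dimensional "conformers" and a toy similarity function standing in for a real geometric comparison:

```python
def bin_conformers(conformers, weights, similarity, threshold=0.9):
    """Greedily merge conformers whose similarity to an existing bin
    representative exceeds `threshold`; the statistical weight of each
    bin is the sum of the weights of its members."""
    reps, binned = [], []
    for conf, w in zip(conformers, weights):
        for i, rep in enumerate(reps):
            if similarity(conf, rep) >= threshold:
                binned[i] += w   # fold weight into an existing bin
                break
        else:
            reps.append(conf)    # no similar bin found: start a new one
            binned.append(w)
    return reps, binned

# Toy similarity: 1D "conformers" compared by absolute distance.
sim = lambda a, b: 1.0 - min(abs(a - b), 1.0)
reps, ws = bin_conformers([0.00, 0.01, 0.50], [0.5, 0.3, 0.2], sim)
```

Here the first two conformers are nearly identical and collapse into one bin with weight 0.8, leaving two effective conformers instead of three.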

Conclusion
We have introduced a new method for predicting properties from conformer ensembles, along with several new 3D-based models. We achieved significant improvement over 2D models with a single conformer, but found that multiple conformers did not improve the predictions. We then applied the best single-conformer 3D model to identify top candidates for SARS-CoV-2 inhibition, and made the ranking publicly available. With access to the GEOM dataset and the models introduced in this work, the community will be able to use our models and refine our conformer approach to improve virtual screening.

Acknowledgements

Data availability
The code used to train the models is available at [64]. Datasets, splits, trained models and performance statistics can be found at [51].
Supplementary Material for Molecular machine learning with conformer ensembles

A Saved datasets and models
The code for creating datasets and training models is available in the Neural Force Field repository [64], which contains a variety of 3D-based models [17][18][19] that predict either molecular properties or atomic forces. The force field can handle periodic structures, generate coarse-grained forces [65,66], and run molecular dynamics through the Atomic Simulation Environment [67]. The property predictor extends SchNet [17,18] and ChemProp [21] to predict properties based on one or more conformers, and contains scripts with extensive documentation for dataset generation, training, hyperparameter optimization, and transfer learning.
The trained models and datasets themselves can be found at [51]. Both the model and dataset folders have sub-folders for each prediction task, with the model folder further split by network architecture. Each network folder contains log files with training and validation scores for each epoch, models saved at every epoch, the parameters used to create and train the model, fingerprints and attention weights generated for the test set, and test set predictions. There are also folders with ChemProp models trained from scratch, ChemProp models used for transfer learning, and parameter scores from hyperparameter optimization. Further details of the data layout can be found in the accompanying README file.

B Data pre-processing
For multiple conformer models we excluded molecules whose conformers had different graphs from each other. The graphs sometimes differed because of reactivity in the CREST simulations. A small number of conformers also had bond lengths greater than the cutoff distance. We excluded these species because the model identifies the subset of neighbors that are bonded and assigns bond features to them; bond features would therefore be missing if any bond length exceeded the cutoff. Excluding species for the above reasons left an average of 96% of the original dataset.
Due to computational constraints we limited the dataset to the 200 highest-probability conformers of molecules with 100 atoms or fewer. The probabilities were renormalized to account for the discarded conformers. The constraint on the number of atoms excluded only 0.2% of molecules. The conformer constraint was more restrictive, but still reasonable: 85% of the molecules had 200 conformers or fewer, and on average a limit of 200 conformers recovered 94% of the total number of conformers of a species.
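The truncation and renormalization step can be sketched as below; `truncate_and_renormalize` is an illustrative helper, not the actual pre-processing code:

```python
def truncate_and_renormalize(probs, max_conf=200):
    """Keep the `max_conf` highest-probability conformers and rescale
    the surviving probabilities so that they sum to one."""
    kept = sorted(probs, reverse=True)[:max_conf]
    total = sum(kept)
    return [p / total for p in kept]

# Keep the top 2 of 4 conformers: 0.4 and 0.3 are rescaled to 4/7 and 3/7.
p = truncate_and_renormalize([0.4, 0.3, 0.2, 0.1], max_conf=2)
```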

C Training
The training process was very computationally demanding. First, the dataset contained 30 million geometries. Second, the computational graph of each species contained up to 200 separate graphs for each fingerprint. Since the fingerprints of a species were pooled and used as input to the readout layer, none of the computational graphs could be freed until all fingerprints had been created and pooled. Third, the ChemProp3D models had to store and update edge features. For a molecule with M atoms and an average of m neighbors per atom, this means that M · m directed edge features had to be updated, whereas a node-based MPNN would only update M node features. The average atom had m = 11 neighbors. Moreover, the neighbors of each edge also had to be aggregated, which further increased computational cost.
To address these challenges we parallelized training over 32 Nvidia Volta V100 GPUs (32 GB of memory each), and for ChemProp3D models performed batching over conformers. Each batch consisted of 4-7 conformers for one species, which was the largest value that could fit in GPU memory. This value is rather small, as memory also had to be allocated for the computational graph of all conformers. Fingerprints were generated for each batch in succession and pooled once all batches were finished. The pooled fingerprints were used as input for the readout layer, which generated a prediction, and the prediction was used to update the loss gradient. The computational graph was then erased, freeing memory to repeat the process. This process was repeated twice on each GPU, generating a loss gradient from 64 species, and the gradient was then used for a training step.

3D models were trained with an initial learning rate of 10^-4. The learning rate was reduced by a factor of two if the validation loss had not decreased in 10 epochs, and training continued until the learning rate reached 10^-6. The exception was the CoV 3CL models, which were only trained for 25-50 epochs because of time constraints. The model with the best score on the validation set was evaluated on the test set.

To account for the binding/non-binding class imbalance we used restrictive over-sampling. In this approach the under-represented class is sampled at the same rate as the over-represented class, so the average batch contains half positive binders and half negative binders. One epoch samples N molecules, where N is the size of the dataset, but only about half of them are unique: the other half contains positives that are continually resampled. It therefore takes the model two epochs to see the entire dataset. In ChemProp an epoch contains all positives sampled once, together with an equal number of negatives, so it takes the model n_neg/n_pos epochs to see all the data. Since ChemProp usually requires 30 epochs for convergence, we trained all ChemProp models for ≈ 30 n_neg/n_pos epochs, though the models usually converged far earlier.
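The restrictive over-sampling scheme can be sketched as follows, with a simple random sampler standing in for the actual data loader:

```python
import random

def oversampled_epoch(pos, neg, rng):
    """One epoch of restrictive over-sampling: draw N = len(pos) + len(neg)
    samples, choosing a positive or a negative with equal probability, so
    the average batch is half binders and half non-binders."""
    n = len(pos) + len(neg)
    return [rng.choice(pos) if rng.random() < 0.5 else rng.choice(neg)
            for _ in range(n)]

rng = random.Random(0)
pos, neg = list(range(10)), list(range(100, 1000))  # 10 hits, 900 misses
epoch = oversampled_epoch(pos, neg, rng)
pos_set = set(pos)
frac_pos = sum(1 for x in epoch if x in pos_set) / len(epoch)
```

With 10 hits and 900 misses, a standard epoch would show the model roughly a 1:90 class ratio; here `frac_pos` stays near 0.5, at the cost of heavy resampling of the positives.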

D Hyperparameter optimization
Hyperparameters for ChemProp models were optimized using the defaults in the repository [32], which vary the dimension of the hidden state (300 to 2400), the number of convolutions (2 to 6), the dropout rate (0 to 0.4), and the number of readout layers (1 to 3). For single conformer 3D models we optimized the dropout rate in the convolution layers (0 to 0.4) and readout layers (0 to 0.4), sampling the logarithm of the dropout rate uniformly. For multiple conformer models we also optimized the type of attention (linear or pair-wise), the dropout rate in the attention layer, and the number of attention heads. We used Bayesian hyperparameter optimization for each model [59], with 20 hyperparameter combinations for faster models and 5-10 for slower ones.
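Sampling the logarithm of the dropout rate uniformly can be illustrated as below; the lower bound of 1e-3 is a hypothetical stand-in, since a lower bound of exactly 0 has no logarithm:

```python
import math
import random

def log_uniform(low, high, rng):
    """Sample uniformly in log-space between `low` and `high` (both > 0),
    so that small values are explored as densely as large ones."""
    return math.exp(rng.uniform(math.log(low), math.log(high)))

rng = random.Random(0)
# Hypothetical lower bound 1e-3 stands in for the 0 in the paper's range.
samples = [log_uniform(1e-3, 0.4, rng) for _ in range(1000)]
```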
To choose the best model we evaluated the score on the validation set (not the test set). In most cases the PRC-AUC metric was used, for the following reason. Both the PRC and the ROC involve recall (the true positive rate), the proportion of total hits correctly identified by the model. The PRC's second variable is precision, the proportion of identified hits that are actually hits, whereas the ROC's second variable is the false positive rate, the proportion of misses incorrectly identified as hits. Precision is more relevant to virtual screening, which focuses on maximizing the ratio of hits to misses in a small sample rather than on minimizing the number of misses that are mislabeled as hits. An exception was made for CoV-2 3CL and CoV-2 ChemProp3D/ChemProp3D-NDU. Here the models learned slowly, and early epochs contained high PRC values but ROC values below 0.5 (worse than a random model). Hence the ROC was a more informative metric for hyperparameter optimization.
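These three quantities can be made concrete with a small example; `binary_rates` is an illustrative helper evaluated at a single decision threshold (the full PRC and ROC curves sweep this threshold):

```python
def binary_rates(y_true, y_pred):
    """Precision, recall (true positive rate) and false positive rate
    from binary labels and thresholded predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp)   # PRC's second axis
    recall = tp / (tp + fn)      # shared by PRC and ROC
    fpr = fp / (fp + tn)         # ROC's second axis
    return precision, recall, fpr

# Imbalanced toy set: 3 hits, 7 misses; the model flags 3 compounds.
y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
prec, rec, fpr = binary_rates(y_true, y_pred)
```

Note that a single extra false positive moves precision far more than it moves the false positive rate when misses dominate, which is why the PRC is the more discriminating metric for screening.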
Random forest models were trained as regressors rather than classifiers, as the ROC and PRC scores are not meaningful when the predictions themselves are binary. Any predictions greater than 1 were set to 1, and any predictions less than 0 to 0, though all model predictions were already in the range (0, 1). Hyperparameters were also optimized with Bayesian methods [59]. We used the ROC score on the validation set to evaluate hyperparameters, because it consistently gave the most robust models (i.e. those with both high ROC and PRC scores). Ten models were trained with different initial seeds, and balanced sampling was used during training. Details of the hyperparameter ranges can be found in [64], and further details about the optimization are in [51].

Figure 1 :
Figure 1: Molecular representations of the latanoprost molecule. (top) SMILES string. (left) Stereochemical formula with edge features, including wedges for in- and out-of-plane bonds and a double line for cis isomerism. (right) Overlay of conformers; higher transparency corresponds to lower statistical weight.

Figure 2 :
Figure 2: Schematic of the architecture in this work. (a) A neural network uses the 3D structure of the n-th conformer, along with its graph information, to create a molecular fingerprint h^(n). (b) Each fingerprint is combined with an embedding of the conformer's statistical weight p^(n). The new fingerprints q^(n) are aggregated through an attention mechanism to yield the final fingerprint Q.
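A minimal sketch of this aggregation step follows, appending the statistical weight p^(n) to each fingerprint as a simple stand-in for the learned weight embedding, and pooling with softmax attention over hypothetical learned scores:

```python
import math

def attention_pool(fingerprints, stat_weights, scores):
    """Combine per-conformer fingerprints into a single fingerprint Q.
    Each fingerprint is tagged with its statistical weight, then the tagged
    fingerprints q^(n) are summed with softmax attention over `scores`."""
    # Append the statistical weight (stand-in for a learned embedding).
    tagged = [fp + [w] for fp, w in zip(fingerprints, stat_weights)]
    # Softmax over the (hypothetical) learned attention scores.
    exp = [math.exp(s) for s in scores]
    z = sum(exp)
    attn = [e / z for e in exp]
    dim = len(tagged[0])
    return [sum(a * q[d] for a, q in zip(attn, tagged)) for d in range(dim)]

# Two conformers with 2D fingerprints, statistical weights 0.7/0.3,
# and attention scores favoring the first conformer.
Q = attention_pool([[1.0, 0.0], [0.0, 1.0]], [0.7, 0.3], [2.0, 0.0])
```

In the real models the attention scores are produced by a learned layer (linear or pair-wise), and multiple heads are used.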

Figure 3 :
Figure 3: Top 10 drug candidates for SARS-CoV-2 inhibition, as predicted by CP3D-NDU (1-C) with 10-fold averaging. Compounds are ordered by descending score from left to right and top to bottom.

Table 1 :
Number of hits (total number of species in parentheses) for each split of each dataset.

Table 3 :
Transfer learning results from CoV 3CL to CoV-2 3CL for different models, with and without additional message passing from ChemProp.

Results are shown in Table 3, and can be compared with the ChemProp and random forest baselines in Table 2. The CND model yields the best results, achieving the best ROC with 200 conformers and the best PRC with one conformer. The improvement in the PRC is substantial, with 1-C achieving a score of 0.623, compared with ≈0.55 for ChemProp and random forest. However, the 200-C ROC is essentially the same as the random forest ROC. Together these improvements are not enough to justify pre-training a 3D model with hundreds of thousands of species. A simple 2D model with Morgan fingerprints has competitive accuracy and is orders of magnitude faster.
The authors thank the XSEDE COVID-19 HPC Consortium, project CHE200039, for compute time. The NASA Advanced Supercomputing (NAS) Division, the LBNL National Energy Research Scientific Computing Center (NERSC), the MIT Engaging cluster, the Harvard Cannon cluster, and the MIT Lincoln Lab Supercloud cluster are gratefully acknowledged for computational resources and support. The authors also thank Christopher E. Henze (NASA) and Shane Canon and Laurie Stephey (NERSC) for technical discussions and computational support, MIT AI Cures (https://www.aicures.mit.edu/) for molecular datasets, and Wujie Wang, Daniel Schwalbe Koda, and Shi Jun Ang (MIT DMSE) for scientific discussions and access to computer code. Financial support from DARPA (Award HR00111920025) and the MIT-IBM Watson AI Lab is acknowledged.