Monday 21 January 2019

Response to Pat Walters on ML in drug discovery

Thanks again for your response, Pat, and I’ll try to both clarify my previous comments and respond to the challenges that you’ve presented (my comments are in red italics).

In defining ML as “a relatively well-defined subfield of AI” I was simply attempting to establish the scope of the discussion. I wasn’t implying that every technique used to model relationships between chemical structure and physical or biological properties is ML or AI.

[As a general point, it may be helpful to say what differentiates ML from other methods (e.g. partial least squares) that have been used for decades for modeling multivariate data in drug discovery. Should CoMFA be regarded as ML? If not, why not?]

You make the assertion that ML may be better for classification than regression, but don't explain why: "I also have a suspicion that some of the ML approaches touted for drug design may be better suited for dealing with responses that are categorical (e.g. pIC50 > 6 ) rather than continuous (e.g. pIC50 = 6.6)"

[My suspicions are aroused when I see articles like this in which the authors say “QSAR” but use a categorical definition of activity. At the very least, I think modelers do need to justify the application of categorical methods to continuous data rather than presenting it as a fait accompli. J Med Chem addresses the categorization of continuous data in section 8g of the guidelines for authors.]

In my experience, the choice of regression vs classification is often dictated by the data rather than the method. If you have a dataset with 3-fold error and one log of dynamic range, you probably shouldn’t be doing regression. If you have a dataset that spans a reasonable dynamic range and isn’t, as you point out, bunched up at the ends of the distribution, you may be able to build a regression model.

[The trend that one observes in such a data set is likely to be very weak, and I would still generally start with regression analysis because this shows the weakness in the trend clearly. The 3-fold error doesn’t magically disappear when you transform the continuous data to make it categorical (it translates to uncertainty in the categorization). Categorization of a data set like this may be justified if the distribution of the data suggests that it is highly clustered.]
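A minimal numerical sketch of this point, using made-up data: with 3-fold assay error (about 0.48 log units) and one log unit of dynamic range, regression recovers little of the signal, and the same noise reappears as misclassification around a pIC50 > 6 cutoff rather than disappearing.

```python
# Sketch (hypothetical data): how 3-fold assay error interacts with a one-log
# dynamic range, for regression vs a 'pIC50 > 6' categorical cutoff.
import numpy as np

rng = np.random.default_rng(0)
n = 500
sigma = np.log10(3.0)                     # 3-fold error ~ 0.48 log units

true_pic50 = rng.uniform(5.5, 6.5, n)     # one log unit of dynamic range
measured = true_pic50 + rng.normal(0.0, sigma, n)

# Regression view: how much of the variance in the measurement is signal?
r = np.corrcoef(true_pic50, measured)[0, 1]
print(f"r^2 between true and measured pIC50: {r**2:.2f}")

# Categorical view: the same noise reappears as misclassification at the cutoff.
cutoff = 6.0
misclassified = np.mean((true_pic50 > cutoff) != (measured > cutoff))
print(f"fraction misclassified by 'pIC50 > 6' cutoff: {misclassified:.2f}")
```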

Your argument about the number of parameters is interesting: "One of my concerns with cheminformatic ML is that it is not always clear how many parameters have been used to build the models (I’m guessing that, sometimes, even the modelers don’t know) and one does need to account for numbers of parameters if claiming that one model has outperformed another."

I think this one is a bit more tricky than it appears. In classical QSAR, many people use a calculated LogP. Is this one parameter? There were scores of fragment contributions and dozens of fudge factors that went into the LogP calculation; how do we account for these? Then again, the LogP parameters aren't adjustable in the QSAR model. I need to ponder the parameter question and how it applies to ML models which use things like regularization and early stopping to prevent overfitting.

[I would say that logP, whether calculated or measured, is a descriptor, rather than a parameter, in the context of QSAR (and ML) and that the model-building process does not ‘see’ the ‘guts’ of the logP prediction. In a multiple linear regression model (like a classical Hansch QSAR) there will be a single parameter associated with logP (the coefficient a1 in the term a1*logP). However, models that are non-linear with respect to logP will have more than one parameter associated with logP (e.g. a1 and a2 in a1*logP + a2*logP^2). In some cases, the model may appear to have a huge number of parameters, although this may be an illusion because some methods for modeling do not allow the parameters to be varied independently of each other during the fitting process. The term ‘degrees of freedom’ is used in classical regression analysis to denote the number of parameters in a model (I don’t know if there is an analogous term for ML models).

As noted in my original post, the number of parameters used by ML models is not usually accounted for. Provided that the model satisfies validation criteria, the number of parameters is effectively treated as irrelevant. My view is that, unless the number of fitting parameters can be accounted for, it is not valid to claim that one model has outperformed another.]
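As an illustration of what such accounting might look like, here is a sketch using synthetic data and a penalized criterion (AIC) from classical regression; the coefficients and noise level are made up, and this is only one of several ways the parameter count could be handled.

```python
# Sketch (synthetic data): comparing a linear-in-logP model with a
# quadratic-in-logP model while accounting for the number of fitted
# parameters via AIC. The data-generating coefficients are made up.
import numpy as np

rng = np.random.default_rng(1)
n = 60
logp = rng.uniform(0.0, 5.0, n)
pic50 = 4.0 + 0.8 * logp - 0.1 * logp**2 + rng.normal(0.0, 0.3, n)

def fit_and_aic(design, y):
    """Least-squares fit; AIC = n*ln(RSS/n) + 2k, with k fitted parameters."""
    coef, rss, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss = float(rss[0]) if rss.size else float(np.sum((y - design @ coef) ** 2))
    k = design.shape[1]
    return coef, len(y) * np.log(rss / len(y)) + 2 * k

ones = np.ones_like(logp)
X_linear = np.column_stack([ones, logp])            # a0 + a1*logP            (2 parameters)
X_quad   = np.column_stack([ones, logp, logp**2])   # a0 + a1*logP + a2*logP^2 (3 parameters)

for name, X in [("linear", X_linear), ("quadratic", X_quad)]:
    _, aic = fit_and_aic(X, pic50)
    print(f"{name:9s} AIC = {aic:.1f}   (lower is better, penalised for parameters)")
```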

I’m not sure I understand your arguments regarding chemical space. You conclude with the statement: “It is typically difficult to perceive structural relationships between compounds using models based on generic molecular descriptors”.

[I wasn’t nearly as clear here as I should have been. I meant molecular descriptors that are continuous-valued and define the dimensions of a space. By “generic” I mean descriptors that are defined for any molecular structure, which has advantages (generality) and disadvantages (models that are difficult to interpret). SAR can be seen in terms of structural relationships (e.g. X is the aza-substituted analog of Y) between compounds and the affinity differences that correspond to those relationships. What I was getting at is that it is difficult to perceive SAR using generic molecular descriptors (as defined above).]

Validation is a lot harder than it looks. Our datasets tend to contain a great deal of hidden bias. There is a great paper from the folks at Atomwise that goes into detail on this and provides some suggestions on how to measure this bias and to construct training and test sets that limit the bias.

[I completely agree that validation is a lot harder than it looks and there is plenty of scope for debate about the different causes of the difficulty. I get uncomfortable when people declare models to be validated according to (what they claim are) best practices and suggest that the models should be used for regulatory purposes. I seem to remember sending an email to the vice chair of the 2005 or 2007 CADD GRC suggesting a session on model validation, although there was little interest at the time. At EuroQSAR 2010, I suggested to the panel that the scientific committee should consider model validation as a topic for EuroQSAR 2012. The panel got a bit distracted by another point and, after I was sufficiently uncouth as to make the point again, one of the panel declared that validation was a solved problem.]
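For readers who want something concrete on the train/test construction issue Pat raises above, a minimal sketch of a Bemis-Murcko scaffold split with RDKit is shown below; this is a common (and imperfect) way of limiting similarity bias between training and test sets, and is not the specific procedure described in the Atomwise paper.

```python
# Sketch: a Bemis-Murcko scaffold split as one common (and imperfect) way to
# limit train/test similarity bias; splitting by structure rather than at random.
from collections import defaultdict
from rdkit import Chem
from rdkit.Chem.Scaffolds import MurckoScaffold

smiles = ["c1ccccc1CCN", "c1ccccc1CCO", "c1ccncc1CC(=O)N", "C1CCCCC1N"]  # toy examples

by_scaffold = defaultdict(list)
for smi in smiles:
    mol = Chem.MolFromSmiles(smi)
    scaffold = MurckoScaffold.MurckoScaffoldSmiles(mol=mol)
    by_scaffold[scaffold].append(smi)

# Keep every compound sharing a scaffold on the same side of the split.
train, test = [], []
for i, (scaffold, members) in enumerate(sorted(by_scaffold.items())):
    (train if i % 2 == 0 else test).extend(members)

print("train:", train)
print("test: ", test)
```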

I have to disagree with the statement that starts your penultimate paragraph: “While I do not think that ML models are likely to have significant impact for prediction of activity against primary targets in drug discovery projects, they do have more potential for prediction of physicochemical properties and off-target activity (for which measured data are likely to be available for a wider range of chemotypes than is the case for the primary project targets).”

Lead optimization projects where we are optimizing potency against a primary target are often places where ML models can make a significant impact. Once we’re into a lead-opt effort, we typically have a large amount of high-quality data, and can often identify sets of molecules with a consistent binding mode. In many cases, we are interpolating rather than extrapolating. These are situations where an ML model can shine. In addition, we are never simply optimizing activity against a primary target. We are simultaneously optimizing multiple parameters. In a lead optimization program, an ML model can help you to predict whether the change you are making to optimize a PK liability will enable you to maintain the primary target activity. This said, your ML model will be limited by the dynamic range of the observed data. The ML model won't predict a single digit nM compound if it has only seen uM compounds.
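A toy illustration of that last point, using made-up features and scikit-learn: a tree-based model trained only on compounds spanning pIC50 5-6 will not predict a value near 8, however far the query extrapolates along the relevant feature.

```python
# Toy illustration (made-up features): a tree-based model cannot predict
# outside the pIC50 range it has seen in training.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X_train = rng.uniform(0.0, 1.0, size=(200, 5))
y_train = 5.0 + X_train[:, 0]             # training pIC50 spans only 5-6 (uM-ish)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# A 'compound' whose feature value extrapolates well beyond the training data;
# by the same linear rule its pIC50 'should' be ~8, i.e. low nM.
X_new = np.array([[3.0, 0.5, 0.5, 0.5, 0.5]])
print("predicted pIC50:", model.predict(X_new)[0])   # stays near the top of 5-6
```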

[I see LO as a process of SAR exploration and would not generally expect an ML model to predict the effects on affinity of forming new interactions and scaffold hops. While I would be confident that the affinity data for an LO project could be modelled, I am much less confident that the models will be useful in design. My guess is that, in order to have significant impact in LO, models for prediction of affinity will need to be specific to the structural series that the LO team is working on. Simple models (e.g. a plot of affinity against logP) can be useful for defining the trend in the data which, in turn, allows us to quantify the extent to which the affinity of a compound beats the trend in the data (this is discussed in more detail in The Nature of Ligand Efficiency, which proved a bit too spicy for two of the J Med Chem reviewers). Put another way, a series-specific model with a small number of parameters may be more useful than a model with many parameters that is (apparently) more predictive. I would argue that we’re searching for positive outliers in drug design. It can also be helpful to draw a distinction between prediction-driven design and hypothesis-driven design.]
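As a sketch of the ‘beats the trend’ idea with synthetic series data: fit a simple affinity-versus-logP trend and treat the residual as the extent by which each compound beats (or falls short of) the trend. This is only an illustration of the residual calculation, not a reproduction of the analysis in the article referred to above.

```python
# Sketch (synthetic series data): fit a simple affinity-vs-logP trend and use
# the residual as the extent to which each compound beats the trend.
import numpy as np

rng = np.random.default_rng(3)
logp = rng.uniform(1.0, 4.0, 30)
pic50 = 4.5 + 0.6 * logp + rng.normal(0.0, 0.3, 30)   # made-up series trend
pic50[7] += 1.2                                        # one compound that beats the trend

slope, intercept = np.polyfit(logp, pic50, 1)
residuals = pic50 - (intercept + slope * logp)

best = int(np.argmax(residuals))
print(f"trend: pIC50 = {intercept:.2f} + {slope:.2f}*logP")
print(f"compound {best} beats the trend by {residuals[best]:.2f} log units")
```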

In contrast, there are a couple of confounding factors that make it more difficult to use ML to predict things like off-target activity. In some (perhaps most) cases, the molecules known to bind to an off-target may look nothing like the molecules you’re working on. This can make it difficult to determine whether your molecules fall within the applicability domain of the model. In addition, the molecules that are active against the off-target may bind to a number of different sites in a number of different ways.

[My suggestion that ML approaches may be better suited for prediction of physical properties and off-target activity was primarily a statement that data are likely to be available for a wider range of chemotypes in these situations than would be the case for the primary target. My preferred approach to assessing potential for off-target activity would actually be to search for known actives that were similar (substructural; fingerprint; pharmacophore; shape) to the compounds of interest. Generally, I would be wary of predictions made by a model that had not ‘seen’ anything like the compounds of interest.]
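A minimal sketch of that similarity-based approach, using RDKit Morgan fingerprints on toy structures: the maximum Tanimoto similarity of a query compound to known off-target actives supports read-across and also serves as a crude check on whether a model has ‘seen’ anything like the compound of interest.

```python
# Sketch: read-across by similarity to known off-target actives using Morgan
# fingerprints; the maximum Tanimoto similarity also gives a crude indication
# of whether a model has 'seen' anything like the compound of interest.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

known_actives = ["CCOc1ccc2nc(S(N)(=O)=O)sc2c1", "Cc1ccc(cc1)S(=O)(=O)N"]  # toy examples
query = "CC(C)Cc1ccc(cc1)C(C)C(=O)O"                                        # toy query

def fp(smiles):
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

query_fp = fp(query)
similarities = [DataStructs.TanimotoSimilarity(query_fp, fp(s)) for s in known_actives]

print("max Tanimoto similarity to known off-target actives:", round(max(similarities), 2))
```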

At the end of the day, ML is one of many techniques that can enable us to make better decisions on drug discovery projects. Like any other computational tool used in drug discovery, it shouldn’t be treated as an oracle. We need to use these tools to augment, rather than replace, our understanding of the SAR.

[Agreed, although I believe that ML advocates need to be clearer about what ML can do that the older methods can’t do. However, I do not see ML methods augmenting our understanding of SAR because neither the models nor the descriptors can generally be interpreted in structural terms.]
