Friday, 8 February 2019

The nature of ligand efficiency


"First they ignore you, then they laugh at you
then they fight you, then you win"
(Sometimes attributed to Mahatma Gandhi)

It's been a bit of a slog but 'The nature of ligand efficiency' (NoLE) has just been published in Journal of Cheminformatics. As detailed in the previous post, the manuscript proved too spicy for two of the reviewers assigned by Journal of Medicinal Chemistry who did seem rather keen that the readers of that journal not be exposed to the meaninglessness of the ligand efficiency metric. One of the great things about preprints is that we no longer need to take shit from reviewers and, after spicing up the manuscript a bit more, I uploaded it as my first contribution to ChemRxiv. The exercise did teach me a number of lessons that should serve me well when I get round to writing 'The nature of lipophilic efficiency'...

Proofs for NoLE on my desk at Berwick-on-Sea in Blanchisseuse


I believe that NoLE may be the second scientific publication from the village of Blanchisseuse on the north coast of Trinidad (although I'll happily be proven wrong on this point) and it follows an article in which I gave the editors of a number of ACS journals some unsolicited advice on assay interference. Sir Solomon Hochoy, who was Governor General of Trinidad and Tobago for ten years after independence, grew up in Blanchisseuse and had a house at the far end of the beach where I swim. I met him on the beach a couple of times before he passed away in 1983 and, after that, I would often chat with Lady Thelma who was an expert in the use of the cast net (she said that it was the only way that her cats would eat and, evidently, they were either very numerous or extremely large). I suspect that she would have been able to teach us a thing or two about high throughput screening although I never summoned the courage to ask her exactly how many cats she had.


Steps leading to the Hochoy house

This part of Blanchisseuse has long been associated with picong being given in print and my two articles merely follow an established tradition. The journalist BC Pires is married to one of our next door neighbors and he achieved international notoriety for a 1994 column in which he gently poked fun at the ears of a touring England fast bowler:

"He's a bowler with bat ears. How can he bowl fast with ears like that? The wind resistance must be a bitch to overcome, which probably accounts for the odd look of his first four steps. If he had his ears tucked, he'd outpace Ambrose. A less determined man would have opted to bowl spin. I'm amazed that he can walk head-on into the wind, far less run in at a seamer's speed. If, just as he reached the end of his run-up, he tripped at the bowling crease, he would probably glide to the other wicket. He could stump the batsman himself off his own delivery. He could deliver his delivery. He's a gentleman though; he's the only bowler I know who walks with his own sight screen attached to his head."

Needless to say BC was forced to make a grovelling apology and, if forced to make a grovelling apology for any of my scientific articles, I shall certainly consult my old schoolmate who is a (the?) world-leading expert and thought leader in the making of grovelling apologies.

"But Mr Caddick has demanded an apology and so I feel that I must, with utmost sincerity, say that I am very sorry indeed that Andy Caddick has big ears."

"Talk to me, Basil, I'm all ears" (with BC in Barbados)

Like both BC and me, my late father was taught by the Holy Ghost Fathers and the title of his final book was inspired by a rather bizarre episode that had taken place a couple of years before BC was accosted in the commentary box at Bourda by a fast bowler who is 20 cm taller than me (take another look at the photo above as you try to imagine that scene). For decades, a weather vane in the form of a sea serpent had sat harmlessly on the roof of the Red House which is currently being renovated but, at the time, housed Parliament. Some years previously, the calypsonian Sugar Aloes had sung that the sea serpent (commonly believed to be a dragon) was an evil omen and, when the People's National Movement (PNM) were re-elected, it was decided that the dragon simply had to go.

The dragon was duly replaced with a white dove in a nocturnal operation that was personally supervised by the Minister for Works. The reason given for installing the dove at night was that they wanted to minimize disruption of traffic and this must be the only occasion on which a government in Trinidad and Tobago has been concerned about disrupting traffic. Although some considered the dove (with an olive branch in its beak) to be a masterpiece, others were unconvinced. In particular, there was something that just didn't look right. It was my late father, Professor of Zoology at the University of the West Indies, who identified the problem in a letter to the Daily Express.

"Examination of your photograph shows an aerofoil of mixed parentage. The inner segment resembles that of any small bird with a generalised wing such as a dove or a keskidee. The outer segment with its greater width is clearly that of a soaring bird such as a corbeau.

In flight a bird's tail feathers would trail and not be partly spread. If however it was landing the tail fan would be widely spread and the long axis of its body inclined sharply upward to the flight path. In flight, no bird would spread its legs in this way but would trail them. The bird in the photograph is clearly not flying or landing normally.

Zoologically the only circumstances under which this configuration of spread legs, partially spread tail fan and spread primaries is possible is when a bird defecates in flight."            

Masters of picong

No thought leaders, key opinions, cricketers or avifauna were harmed during the production of this blog post.

Sunday, 27 January 2019

Reviewing the reviewers


I recently published The Nature of Ligand Efficiency (NoLE) as a ChemRxiv preprint and this was featured (for all the right reasons) in a post at In The Pipeline. The material had been previously submitted to J Med Chem but it proved a bit too spicy for two of the three reviewers. I'll review the J Med Chem reviewers in this blog post and I hope that the feedback will be useful in the event of the journal being presented with similarly flavored material in the future. NoLE was my second publication from Berwick-on-Sea in the village of Blanchisseuse on the north coast of my native Trinidad and I'll include some photos from there to break up the text a bit.



Gate at Berwick-on-Sea in Blanchisseuse. The house was built (quite literally) by my late father (who would have been 89 today) and was named for my mother's home town of Berwick-upon-Tweed which has changed hands between England and Scotland on a number of occasions and may even still be at war with Imperial Russia.

The selection of reviewers for manuscripts that criticize previous studies presents a dilemma for journal editors. While it is prudent to consult those with a stake in what is being criticized, these may not be the best people to ask about whether or not the criticism should be made. In particular, a reviewer using his/her position as a reviewer to suppress criticism of something in which he/she has a stake raises ethical questions. A stake in ligand efficiency (LE) could take any of a number of forms. First, one could have introduced a metric for LE. Second, one could have written articles endorsing ligand efficiency metrics or asserting their validity. Third, one could have enthusiastically promoted the LE metric at one's institution (e.g. by mandating that LE values be quoted when presenting project updates at the dog and pony shows that are an essential part of modern drug discovery). Fourth, one might be a devout member of the Fragment Cult (for whom the Doctrinal Correctness of LE is an Article of Faith).

There were three reviewers for my manuscript and I'll call them A, B and C since their numbers got scrambled between different rounds of review (also using the term 'Reviewer 3' might give some readers anxiety attacks). Reviewer A had nothing constructive to say and simply spat feathers. Reviewer B was very positive about the manuscript and made a number of  helpful suggestions. Reviewer C demanded that the manuscript be watered down to homeopathic levels (and that was never going to happen).

Here's my office at Berwick-on-Sea. That's a printout of NoLE on my desk (under the hanging beach towel).

The central theme of my manuscript is the argument that ligand efficiency is physically meaningless because perception of efficiency changes with the concentration unit in which affinity is expressed. This is actually a very serious criticism since a change in perception resulting from a change in a unit would normally be regarded in physical science as an error in the "not even wrong" category. It's not something that one can simply sweep under the carpet as a "limitation" of ligand efficiency. Despite their howls of protest, neither Reviewer A nor Reviewer C offered a coherent counter-argument.
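The unit dependence is easy to demonstrate numerically. The sketch below uses made-up Kd values and heavy atom counts (not taken from NoLE) to compute LE = −ΔG°/N_heavy for a hypothetical fragment and a hypothetical lead, and shows that which of the two is perceived as more 'efficient' depends entirely on the standard concentration chosen:

```python
import math

R = 1.987e-3   # gas constant in kcal/(mol*K)
T = 298.0      # temperature in K

def ligand_efficiency(kd_molar, n_heavy, c_std=1.0):
    """LE = -RT*ln(Kd/C°)/N_heavy with Kd and C° in mol/L."""
    dg_std = R * T * math.log(kd_molar / c_std)
    return -dg_std / n_heavy

# Hypothetical fragment (10 heavy atoms, Kd = 1 uM)
# and hypothetical optimized lead (20 heavy atoms, Kd = 1 nM)
frag = ligand_efficiency(1e-6, 10)                 # C° = 1 M (the conventional choice)
lead = ligand_efficiency(1e-9, 20)
print(frag > lead)   # True: the fragment looks more "efficient"

frag_uM = ligand_efficiency(1e-6, 10, c_std=1e-6)  # C° = 1 uM instead
lead_uM = ligand_efficiency(1e-9, 20, c_std=1e-6)
print(frag_uM > lead_uM)  # False: the ranking flips with the unit
```

Nothing physical has changed between the two comparisons; only the (arbitrary) standard concentration has.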

The tactic adopted by Reviewer C was to simply dismiss the physical arguments presented in the manuscript as "opinion" without presenting counter-argument. J Med Chem really does need to make it clear to reviewers that they need to do much better than this since it reflects badly on the journal.

Reviewer C. "'Physically meaningless' is at best an inflammatory opinion whereas the fact that other choices could have been made is often under-appreciated."
PWK. This criticism appears to be doctrinal rather than scientific and I note that Reviewer C has not offered counter-argument to the argument that LE is physically meaningless.


Here's a view of the Caribbean Sea. The 20 m drop from the gap in the vegetation is just as precipitous as you would expect although we've not (yet) lost any personnel or household pets over the edge.

Reviewer A struggled woefully with rudimentary physical chemistry throughout the review process and, given that I'd suggested a number of potential reviewers with the necessary expertise in molecular recognition and chemical thermodynamics, I was at a loss to understand why a reviewer who was so ill-equipped for the task at hand had been invited to review the manuscript.

Reviewer A. Reactions are considered to be spontaneous under standard conditions when the free energy is negative, but by changing the definition of C° in an arbitrary manner, any reaction can be said to be spontaneous or not. This is true in a trivial sense, but generations of researchers have found the concept of negative or positive free energies useful.
PWK. The flaw in this argument is that if you change the value of C° then you also change whether or not the reaction is spontaneous under the standard conditions. This is the basis of the law of mass action and it is also important to remember that KD values are not measured at a single concentration. A chemical process (at constant temperature and pressure) by which the system changes from state A to state B will be spontaneous if ΔG[A→B] is negative. Regardless of the experiences of generations of researchers, medicinal chemists rarely (if ever) appear to use the sign of ΔG (e.g. for binding under assay conditions) when analyzing SAR or for making any other decisions.
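The point about spontaneity can be made in two lines. For a hypothetical dissociation constant of 1 mM, the sign of the standard binding free energy ΔG° = RT ln(Kd/C°) flips when the standard concentration is moved from 1 M to 1 µM:

```python
import math

R, T = 1.987e-3, 298.0   # kcal/(mol*K), K
kd = 1e-3                # hypothetical Kd of 1 mM

for c_std in (1.0, 1e-6):                   # 1 M vs 1 uM standard state
    dg_std = R * T * math.log(kd / c_std)   # standard free energy of binding
    print(c_std, dg_std < 0)                # negative => favorable under that C°
```

The binding is "favorable under standard conditions" with the 1 M convention and "unfavorable" with the 1 µM convention, which is exactly why the sign of ΔG° cannot be invoked to defend a particular choice of C°.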

This is the start to the path down to the lower deck

In one round of review, Reviewer C stated “I believe that it is incumbent on the author to argue that the choice of standard state used by medicinal chemists is not useful” and Reviewer A repeated the criticism in a subsequent round, noting that this was "the central problem with the manuscript". I thought this was a bit rich given that Reviewer A and Reviewer C had each accused me of using straw man tactics at different points in the review process. The more serious problem, however, is that we have two LE advocates each attempting to transfer the burden of proof that (in science) one accepts as soon as one advocates that people take an action (e.g. use LE metrics). Reviewers A and C appeared to do this in order to evade their responsibility as reviewers to present counter-argument to the arguments in the manuscript. This would be like a thought leader (yes, there really are people who call themselves 'thought leaders') responding to criticism of a claim that AI was going to transform drug discovery by saying that it was incumbent on the critics to argue that AI was not useful. Imagine if they ran clinical trials like this?

At this point, Reviewer A did rather lose it and I was half expecting to have to fend off a counterattack by Steiner's division. Needless to say, the latest version of the manuscript now opens with "Ligand efficiency (LE) is, in essence, a good concept that is poorly served by a bad metric." and this can be considered the equivalent of a two-fingered gesture that is mistakenly attributed to the English and Welsh longbowmen at Agincourt.

Reviewer A. Dr. Kenny dodges this challenge by stating that the burden of proof should not be on him, but by arguing that LE is a “bad metric” despite its wide usage, he does in fact have to explain why free energy is also a “bad” concept. Not doing so makes the manuscript deeply misleading and therefore inappropriate for publication.
PWK. I only used the term “bad metric” in the conclusions where I wrote “Ligand efficiency is, in essence, a good concept served by a bad metric.” so it is incorrect to state that I have argued that LE is a “bad metric”. In any case, in the revised manuscript, I now question whether LE can accurately be described as a metric since neither its creators nor its advocates appear able (or willing) to say what it measures. Wide usage does not validate rules, guidelines or metrics and I note that, at one time, the prevailing view was that the sun orbited the earth. Once again, Reviewer A is making the serious error of assuming that everything that applies to free energy also applies to any function of free energy. The simple counter to Reviewer A’s challenge is that free energy is a state function and an integral part of the framework of thermodynamics. Although defined in terms of free energy, the LE metric is not part of thermodynamics simply because it appears to require a privileged standard state.

I have occasionally stated that "useful is the last refuge of the scoundrel" and this tends to be misinterpreted as an assertion that utility of a model is unimportant. Nothing could actually be further from the truth and the statement is more a comment on the way that models can be 'validated' by simply labeling them as "useful". In some ways "useful" is analogous to the "God created it that way" statements that you will encounter if you are careless enough to become ensnared in arguments with Creationists. I should also point out that the manuscript did discuss the difficulties of demonstrating the utility of LE while neither A nor C presented any evidence (fervent belief does not usually constitute evidence in science) to support their assertion that the 1 M standard state is more useful than any other standard state.

Reviewer A appeared particularly aggrieved that one of The Great Unwashed should have the temerity to even question the value of LE and the toys were duly ejected from the pram. As my response below indicates, Reviewer A's comment is more what one might have expected from an inquisitor at a fifteenth century heresy trial than from an expert reviewer of a manuscript submitted to the premier medicinal chemistry journal. It is also worth pointing out that LE was touted as "useful" even as it was introduced in a 2004 letter to Drug Discovery Today and all three coauthors of that seminal contribution to the medicinal chemistry literature appeared to be blissfully unaware of the nontrivial dependency of their creation on the standard concentration. As such, I would argue that it would actually be a dereliction of duty not to question the utility of LE.

Reviewer A. Sixth, Dr. Kenny repeatedly questions the utility of LE; for example “The LE metric is claimed by advocates to be useful although it is rarely, if ever, shown to be predictive of pharmaceutically-relevant behavior” (p. 15) and “the LE metric is rarely, if ever, shown to be predictive of phenomena that are relevant to drug discovery” (p. 39).
PWK. This appears to be a doctrinal rather than scientific criticism.

Lower deck. I only swim from here if snorkeling because it's rocky.

Reviewer B was very positive about the "Molecular Size and Design Risk" section and made useful suggestions for its expansion. It's also worth mentioning that Derek quoted from this section in his post. However, Reviewer C suggested that the whole section be purged from the manuscript although it is possible that Reviewer C's underlying objective was to ensure that certain articles were not discussed. Reviewer C complained that my criticism of ref 48 was unfair although it may be that the reviewer considered ref 48 to be a liability (this post will give readers an idea why some LE advocates might consider ref 48 to be a liability). Another possibility is that the objection to criticism of ref 48 was actually a smokescreen and the real reason for suggesting that the section be purged was actually to avoid discussion of ref 45 (which might be considered to be an even greater liability by LE advocates).

Ref 58 and ref 59 are rare examples of articles that respond to criticism of LE and a study such as NoLE really does need to discuss them (especially since both articles completely miss the point). The fundamental flaw that is common to both articles is that neither addresses the problems associated with the change in perception that results from using a different unit to express affinity. Reviewer C protested that it was gratuitous to single out ref 58 and even cited this 2014 post from Molecular Design in support of the charge that I was unfairly picking on ref 58. Reviewer C did seem rather rattled and also complained that I had quoted "non-scientific sections" of ref 59. I must confess to being unfamiliar with the concept that a scientific article can have non-scientific sections that can be declared off-limits for challenge. This was, perhaps, not Reviewer C's finest moment.

Reviewer A and Reviewer C both seemed rather keen that ref 94 not be discussed and they said that I should not be "attacking" fit quality (FQ) because it is rarely, if ever, used. I suspect the real reason was that both reviewers consider the metric (and ref 94) to be a significant liability from the LE perspective. I responded by noting that FQ had got its own box in the NRDD LE review and that ref 94 was cited in ref 58 (which asserts the validity of LE), suggesting that FQ may be of greater interest than Reviewer A and Reviewer C would have us believe. Another reason that Reviewer C might have preferred that the spotlight not be focused on FQ is that the discussion further exposes the illusion that fragments bind more efficiently than ligands of greater molecular size.

This is where I go swimming. It's a 5 minute walk from the house

So that concludes my review of the reviewers. I believe that the J Med Chem editors do need to think carefully about how (or even whether) they wish to have controversial topics addressed in their journal. Dr Eric Williams, the first Prime Minister of Trinidad and Tobago, suggested that his hearing impairment was an advantage in dealing with dissent because he could simply switch off his hearing aid. However, dealing with controversial topics in drug discovery might not be quite so simple. In particular, a journal needs to consider the potential vested interests of those from whom it seeks advice. For example, the Editors of a number of ACS journals may find it quite instructive to take a very close look at exactly how their journals came to endorse a frequent hitter model (trained on results from a panel of only six assays that all use the same readout) as a predictor of pan-assay interference...

I'll leave you with a selfie taken on the roof. A few minutes earlier I'd seen off a determined counter-attack by some jack spaniards (or should that be jacks spaniard?). Normally, I'd leave them alone but they were too close to where I needed to work. The technique is simple but its execution takes some nerve. First, arm yourself with a can of Baygon (don't forget to test it beforehand) and a broom. Second, with Baygon aimed, prod nest with broom. Third, spray a protective curtain of Baygon as the jack spaniards attack you (they are aggressive and they always attack). 

PWK one, jack spaniards nil 


Monday, 21 January 2019

Response to Pat Walters on ML in drug discovery

Thanks again for your response, Pat, and I’ll try to both clarify my previous comments and respond to the challenges that you’ve presented (my comments are in red italics).

In defining ML as “a relatively well-defined subfield of AI” I was simply attempting to establish the scope of the discussion. I wasn’t implying that every technique used to model relationships between chemical structure and physical or biological properties is ML or AI.

[As a general point, it may be helpful to say what differentiates ML from other methods (e.g. partial least squares) that have been used for decades for modeling multivariate data in drug discovery. Should CoMFA be regarded as ML? If not, why not?]

You make the assertion that ML may be better for classification than regression, but don't explain why: "I also have a suspicion that some of the ML approaches touted for drug design may be better suited for dealing with responses that are categorical (e.g. pIC50 > 6 ) rather than continuous (e.g. pIC50 = 6.6)"

[My suspicions are aroused when I see articles like this in which the authors say “QSAR” but use a categorical definition of activity. At the very least, I think modelers do need to justify the application of categorical methods to continuous data rather than presenting it as a fait accompli. J Med Chem addresses the categorization of continuous data in section 8g of the guidelines for authors.]

In my experience, the choice of regression vs classification is often dictated by the data rather than the method. If you have a dataset with 3-fold error and one log of dynamic range, you probably shouldn’t be doing regression. If you have a dataset that spans a reasonable dynamic range and isn’t, as you point out, bunched up at the ends of the distribution, you may be able to build a regression model.

[The trend that one is likely to observe in such a data set is likely to be very weak and I would still generally start with regression analysis because this shows the weakness in the trend clearly. The 3-fold error doesn’t magically disappear when you transform the continuous data to make it categorical (it translates to uncertainty in the categorization). Categorization of a data set like this may be justified if the distribution of the data suggests that it is highly clustered.]
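To put a number on that uncertainty: the simulation below (a hypothetical compound and an illustrative error model, not data from any real assay) gives a compound a true pIC50 of 6.2, adds a 3-fold (≈0.48 log unit) Gaussian assay error, and counts how often a measurement lands on the wrong side of a pIC50 > 6 activity cutoff:

```python
import random

random.seed(0)
SIGMA = 0.48        # ~3-fold assay error expressed in log units
THRESHOLD = 6.0     # pIC50 > 6 counts as "active"

true_pic50 = 6.2    # hypothetical compound sitting just above the cutoff
measurements = [random.gauss(true_pic50, SIGMA) for _ in range(10_000)]
misclassified = sum(m <= THRESHOLD for m in measurements) / len(measurements)
print(round(misclassified, 2))  # roughly a third of replicates land on the wrong side
```

The 3-fold error has not disappeared; it has simply been converted into a substantial probability of mislabeling any compound whose true activity is within a log unit or so of the threshold.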

Your argument about the number of parameters is interesting: "One of my concerns with cheminformatic ML is that it is not always clear how many parameters have been used to build the models (I’m guessing that, sometimes, even the modelers don’t know) and one does need to account for numbers of parameters if claiming that one model has outperformed another."

I think this one is a bit more tricky than it appears. In classical QSAR, many people use a calculated LogP. Is this one parameter? There were scores of fragment contributions and dozens of fudge factors that went into the LogP calculation, how do we account for these? Then again, the LogP parameters aren't adjustable in the QSAR model. I need to ponder the parameter question and how it applies to ML models which use things like regularization and early stopping to prevent overfitting.

[I would say that logP, whether calculated or measured, is a descriptor, rather than a parameter, in the context of QSAR (and ML) and that the model-building process does not ‘see’ the ‘guts’ of the logP prediction. In a multiple linear regression model (like a classical Hansch QSAR) there will be a single parameter (e.g. a1*logP) associated with logP. However, models that are non-linear with respect to logP will have more than one parameter associated with logP (e.g. a1*logP + a2*logP^2). In some cases, the model may appear to have a huge number of parameters although this may be an illusion because some methods for modeling do not allow the parameters to be varied independently of each other during the fitting process. The term ‘degrees of freedom’ is used in classical regression analysis to denote the number of parameters in a model (I don’t know if there is an analogous term for ML models).

As noted in my original post, the number of parameters used by ML models is not usually accounted for. Provided that the model satisfies validation criteria, the number of parameters is effectively treated as irrelevant. My view is that, unless the number of fitting parameters can be accounted for, it is not valid to claim that one model has outperformed another.]
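As a concrete illustration of parameter counting (made-up logP/pIC50 numbers for a hypothetical series, fitted by plain least squares with no ML library): a model that is linear in logP carries two fitted parameters (intercept plus the logP coefficient) while a parabolic model carries three, and the extra parameter has to buy a worthwhile reduction in residual sum of squares before the quadratic model can be claimed to have outperformed the linear one:

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                      # Gaussian elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n                        # back-substitution
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j] for j in range(i + 1, n))) / A[i][i]
    return coeffs

def rss(xs, ys, coeffs):
    """Residual sum of squares for a polynomial model."""
    return sum((y - sum(c * x ** k for k, c in enumerate(coeffs))) ** 2
               for x, y in zip(xs, ys))

logp  = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]   # made-up data with a turnover
pic50 = [4.9, 5.5, 5.9, 6.2, 6.3, 6.1, 5.7, 5.2]

linear    = polyfit(logp, pic50, 1)   # 2 parameters, residual df = 8 - 2 = 6
quadratic = polyfit(logp, pic50, 2)   # 3 parameters, residual df = 8 - 3 = 5
print(rss(logp, pic50, linear) > rss(logp, pic50, quadratic))  # True, by construction
```

The quadratic RSS is guaranteed to be no worse because the linear model is nested within it, which is precisely why raw goodness of fit cannot be used to compare models without accounting for the number of parameters.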

I’m not sure I understand your arguments regarding chemical space. You conclude with the statement: “It is typically difficult to perceive structural relationships between compounds using models based on generic molecular descriptors”.

[I wasn’t nearly as clear here as I should have been. I meant molecular descriptors that are continuous-valued and define the dimensions of a space. By “generic” I mean descriptors that are defined for any molecular structure which has advantages (generality) and disadvantages (difficult to interpret models).  SAR can be seen in terms of structural relationships (e.g. X is the aza-substituted analog of Y) between compounds and the affinity differences that correspond to those relationships. What I was getting at is that it is difficult to perceive SAR using generic molecular descriptors (as defined above).] 

Validation is a lot harder than it looks. Our datasets tend to contain a great deal of hidden bias. There is a great paper from the folks at Atomwise that goes into detail on this and provides some suggestions on how to measure this bias and to construct training and test sets that limit the bias.

[I completely agree that validation is a lot harder than it looks and there is plenty of scope for debate about the different causes of the difficulty. I get uncomfortable when people declare models to be validated according to (what they claim are) best practices and suggest that the models should be used for regulatory purposes. I seem to remember sending an email to the vice chair of the 2005 or 2007 CADD GRC suggesting a session on model validation although there was little interest at the time. At EuroQSAR 2010, I suggested to the panel that the scientific committee should consider model validation as a topic for EuroQSAR 2012. The panel got a bit distracted by another point and, after I was sufficiently uncouth as to make the point again, one of the panel declared that validation was a solved problem.]

I have to disagree with the statement that starts your penultimate paragraph: “While I do not think that ML models are likely to have significant impact for prediction of activity against primary targets in drug discovery projects, they do have more potential for prediction of physicochemical properties and off-target activity (for which measured data are likely to be available for a wider range of chemotypes than is the case for the primary project targets).”

Lead optimization projects where we are optimizing potency against a primary target are often places where ML models can make a significant impact. Once we’re into a lead-opt effort, we typically have a large amount of high-quality data, and can often identify sets of molecules with a consistent binding mode. In many cases, we are interpolating rather than extrapolating. These are situations where an ML model can shine. In addition, we are never simply optimizing activity against a primary target. We are simultaneously optimizing multiple parameters. In a lead optimization program, an ML model can help you to predict whether the change you are making to optimize a PK liability will enable you to maintain the primary target activity. This said, your ML model will be limited by the dynamic range of the observed data. The ML model won't predict a single digit nM compound if it has only seen uM compounds.

[I see LO as a process of SAR exploration and would not generally expect an ML model to predict the effects on affinity of forming new interactions and scaffold hops. While I would be confident that the affinity data for an LO project could be modelled, I am much less confident that the models will be useful in design. My guess is that, in order to have significant impact in LO, models for prediction of affinity will need to be specific to the structural series that the LO team is working on. Simple models (e.g. a plot of affinity against logP) can be useful for defining the trend in the data which, in turn, allows us to quantify the extent to which the affinity of a compound beats the trend in the data (this is discussed in more detail in The Nature of Ligand Efficiency which proved a bit too spicy for two of the J Med Chem reviewers). Put another way, a series-specific model with a small number of parameters may be more useful than a model with many parameters that is (apparently) more predictive. I would argue that we’re searching for positive outliers in drug design. It can also be helpful to draw a distinction between prediction-driven design and hypothesis-driven design.]
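A minimal version of that trend-plus-residuals view, using made-up logP and pIC50 values for a hypothetical series: fit pIC50 against logP by ordinary least squares, then rank compounds by how far they sit above the trend line rather than by raw potency:

```python
# Hypothetical series: the compound at index 3 beats the series trend,
# even though the compound at index 5 is the most potent in absolute terms.
logp  = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
pic50 = [5.0, 5.4, 5.7, 6.9, 6.4, 7.2]

n = len(logp)
mean_x = sum(logp) / n
mean_y = sum(pic50) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(logp, pic50))
         / sum((x - mean_x) ** 2 for x in logp))
intercept = mean_y - slope * mean_x

# Residual = observed pIC50 minus the value predicted by the logP trend
residuals = [y - (intercept + slope * x) for x, y in zip(logp, pic50)]
best = max(range(n), key=lambda i: residuals[i])
print(best)  # index of the compound furthest above the trend line
```

The most potent compound merely rides the lipophilicity trend; the positive outlier is the one whose affinity is not explained by logP, and that is the compound the design team should be studying.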

In contrast, there are a couple of confounding factors that make it more difficult to use ML to predict things like off-target activity. In some (perhaps most) cases, the molecules known to bind to an off-target may look nothing like the molecules you’re working on. This can make it difficult to determine whether your molecules fall within the applicability domain of the model. In addition, the molecules that are active against the off-target may bind to a number of different sites in a number of different ways.

[My suggestion that ML approaches may be better suited for prediction of physical properties and off-target activity was primarily a statement that data is likely to be available for a wider range of chemotypes in these situations than would be the case for primary target. My preferred approach to assessing potential for off-target activity would actually be to search for known actives that were similar (substructural; fingerprint; pharmacophore; shape) to the compounds of interest. Generally, I would be wary of predictions made by a model that had not ‘seen’ anything like the compounds of interest.] 
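A sketch of that similarity-first approach, with hypothetical fingerprints represented as sets of on-bits (a real implementation would generate the fingerprints with a cheminformatics toolkit such as RDKit): compute Tanimoto similarity between a query compound and known off-target actives, and flag the query if any similarity exceeds a chosen cutoff:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two fingerprints held as sets of on-bits."""
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

# Hypothetical on-bit sets: a query compound and two known off-target actives
query    = {1, 4, 9, 16, 25, 36}
active_a = {1, 4, 9, 16, 30, 42}
active_b = {2, 7, 11, 50}

SIMILARITY_CUTOFF = 0.35  # illustrative threshold, not a recommended value
flagged = any(tanimoto(query, fp) >= SIMILARITY_CUTOFF
              for fp in (active_a, active_b))
print(flagged)  # True: the query resembles a known active closely enough to worry
```

Unlike a model prediction, a similarity hit comes with its own evidence attached: you can inspect the known active that triggered the flag and judge for yourself whether the resemblance is meaningful.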

At the end of the day, ML is one of many techniques that can enable us to make better decisions on drug discovery projects. Like any other computational tool used in drug discovery, it shouldn’t be treated as an oracle. We need to use these tools to augment, rather than replace, our understanding of the SAR.

[Agreed although I believe that ML advocates need to be clearer about what ML can do that the older methods can’t do. However, I do not see ML methods augmenting our understanding of SAR because neither the models nor the descriptors can generally be interpreted in structural terms.]

Thursday, 17 January 2019

Thoughts on AI in Drug Discovery - A Practical View From the Trenches


I’ll be taking a look at machine learning (ML) in this post which was prompted by AI in Drug Discovery - A Practical View From the Trenches by Pat Walters in Practical Cheminformatics. Pat’s post appears to be triggered by Artificial Intelligence in Drug Design - The Storm Before the Calm? by Allan Jordan that was published as a viewpoint in ACS Medicinal Chemistry Letters. Some of what I said in the Nature of QSAR is relevant to what I’ll be saying in the current post and I'll also direct readers to Will CADD ever become relevant to drug discovery? by Ash at Curious Wavefunction.  Pat notes that Allan “fails to highlight specific problems or to define what he means by AI” and goes on to say that he prefers “to focus on machine learning (ML), a relatively well-defined subfield of AI”. Given that drug discovery scientists have been modeling activity and properties of compounds for decades now, some clarity would be welcome as to which of the methods used in the earlier work would fall under the ML umbrella.

While not denying the potential of AI and ML in drug design, I note that both are associated with a lot of hype and it would be an error to confuse skepticism about the hype with criticism of AI and ML. Nevertheless, there are some aspects of cheminformatic ML, such as chemical space coverage, that don't seem to get discussed quite as much as I think they should be and these are what the current post is focused on. I also have a suspicion that some of the ML approaches touted for drug design may be better suited for dealing with responses that are categorical (e.g. pIC50 > 6 ) rather than continuous (e.g. pIC50 = 6.6). When discussing ML in drug design, it can be useful to draw a distinction between 'direct applications' of ML (e.g. prediction of behavior of compounds) and 'indirect applications' of ML (e.g. synthesis planning; image analysis). This post is primarily concerned with direct applications of ML.

As has become customary, I’ve included some photos to break up the text a bit. These all feature albatrosses and I took them on a 2009 visit to the South Island of New Zealand. Here's a live stream of a nest at the Royal Albatross Centre on the Otago Peninsula.

Spotted on Kaikoura whale watch

My comment on Pat’s post has just appeared so I’ll say pretty much what I said in that comment here. I would challenge the characterization of ML as “a relatively well-defined subfield of AI”. Typically, ML in cheminformatics focuses on (a) finding regions in descriptor space associated with particular chemical behaviors or (b) relating measures of chemical behavior to values of descriptors.  I would not automatically regard either of these activities as subfields of AI any more than I would regard Hansch QSAR, CoMFA, Free-Wilson Analysis, Matched Molecular Pair Analysis, Rule of 5 or PAINS filters as subfields of AI. I’m sure that there will be some cheminformatic ML models that can accurately be described as a subfield of AI but to tout each and every ML method as AI would be a form of hype.

At Royal Albatross Centre, Otago Peninsula.   

Pat states “In essence, machine learning can be thought of as ‘using patterns in data to label things’” and this could be taken as implying that ML models can only handle categorical responses. In drug design, the responses that we would like to predict using ML are typically continuous (e.g. IC50; aqueous solubility; permeability; fraction unbound; clearance; volume of distribution) and genuinely categorical data are rarely encountered in drug discovery projects. Nevertheless, it is common in drug discovery for continuous data to be made categorical (sometimes we say that the data has been binned). There are a number of reasons why this might not be such a great idea. First, binning continuous data throws away huge amounts of information. Second, binning continuous data distorts relationships between objects (e.g. a pIC50 activity threshold of 6 makes pIC50 = 6.1 appear to be more similar to pIC50 = 9 than to pIC50 = 5.9). Third, categorical analysis does not typically account for ordering (e.g. high | medium | low) of the categories. Fourth, one needs to show that the conclusions of analysis do not depend on how the continuous data has been categorized. The third and fourth issues are specifically addressed by the critique of Generation of a Set of Simple, Interpretable ADMET Rules of Thumb that was presented in Inflation of Correlation in the Pursuit of Drug-likeness.
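The distortion described in the second point can be made concrete with a toy numerical sketch (all pIC50 values are hypothetical):

```python
# Toy illustration of how binning at a pIC50 threshold of 6 distorts
# relationships between compounds. All values are hypothetical.
def bin_active(pic50, threshold=6.0):
    """Binned (categorical) view: 1 if 'active', 0 otherwise."""
    return int(pic50 > threshold)

a, b, c = 6.1, 9.0, 5.9  # hypothetical pIC50 values for three compounds

# On the continuous scale, a is far closer to c than to b...
print(round(abs(a - c), 1), round(abs(a - b), 1))   # 0.2 2.9
# ...but after binning, a and b share a category while c is excluded.
print(bin_active(a), bin_active(b), bin_active(c))  # 1 1 0
```

The binned view puts a and b together despite a 2.9 log unit difference in potency, while separating a and c which differ by only 0.2 log units.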

Royal Albatross Centre, Otago Peninsula. 

Overfitting is always a concern when modelling multivariate data and the fit to the training data generally gets better when you use more parameters. One of my concerns with cheminformatic ML is that it is not always clear how many parameters have been used to build the models (I’m guessing that, sometimes, even the modelers don’t know) and one does need to account for numbers of parameters if claiming that one model has outperformed another. When building models from multivariate data, one also needs to account for relationships between the molecular descriptors that define the region(s) of chemical space occupied by the training set. In ‘traditional’ multivariate data analysis, it is assumed that relationships between descriptors are linear and modelers use principal component analysis (PCA) to determine the dimensions of the relevant regions of space. If relationships between descriptors are non-linear then life gets a lot more difficult. Another of my concerns with ML models is that it is not always clear how (or if) relationships between descriptors have been accounted for.
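The point about fit and parameter counts can be illustrated with a deliberately trivial sketch (the data below are synthetic noise, not real SAR): fitting polynomials of increasing degree to the same training data shows why the quality of the fit alone cannot be used to claim that one model has outperformed another.

```python
import numpy as np

# Synthetic toy data: a noisy linear response (not real SAR).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = x + rng.normal(scale=0.2, size=x.size)

# Fit polynomials of increasing degree to the SAME training data.
residuals = []
for degree in (1, 3, 7):
    coeffs = np.polyfit(x, y, degree)           # degree + 1 parameters
    fit = np.polyval(coeffs, x)
    residuals.append(float(np.sum((fit - y) ** 2)))
    print(degree + 1, round(residuals[-1], 4))  # fit improves as parameters are added

# Because the polynomial spaces are nested, the training residual can only
# decrease with degree, regardless of whether predictivity improves.
```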

At Royal Albatross Centre, Otago Peninsula. 

Although an ML method may be generic and applicable to data from diverse sources, it is still useful to consider the characteristics of cheminformatic data that distinguish them from other types of data. As noted in Structure Modification in Chemical Databases, the molecular connection table (also known as the 2D molecular structure) is the defining data structure of cheminformatics. One characteristic of cheminformatic data is that it is possible to make meaningful (and predictively useful) comparisons between structurally-related compounds and this provides a motivation for studying molecular similarity. In cheminformatic terms we can say that differences in chemical behavior can be perceived and modeled in terms of structural relationships between compounds. This can also be seen as a distance-geometric view of chemical space. Although this may sound a bit abstract, it’s actually how medicinal chemists tend to relate molecular structure to activity and properties (e.g. the bromo-substitution led to practically no improvement in potency but now it sticks like shit to the proverbial blanket in the plasma protein binding assay). This is also a useful framework for analysis of output from high-throughput screening (HTS) and design of screening libraries. It is typically difficult to perceive structural relationships between compounds using models based on generic molecular descriptors.

At Royal Albatross Centre, Otago Peninsula

I have been sufficiently uncouth as to suggest that many ‘global’ cheminformatic models may simply be ensembles of local models and this reflects a belief that training set compounds are often distributed unevenly in chemical space. As we move away from traditional Hansch QSAR to ML models, the molecular descriptors become more numerous (and less physical). When compounds are unevenly distributed in chemical space and molecular descriptors are numerous, it becomes unclear whether the descriptors are capturing the relevant physical chemistry or just organizing the compounds into groups of structurally related analogs. This is an important distinction and the following graphic (which does not feature an albatross) shows why. The graphic shows a simple plot of Y versus X and we want to use this to predict Y for X = 3.  If X is logP and Y is aqueous solubility then it would be reasonable to assume that X captures (at least some of) the physical chemistry and we would regard the prediction as an interpolation because X = 3 is pretty much at the center of this very simple chemical space. If X is simply partitioning the six compounds into two groups of structurally related analogs then making a prediction for X = 3 would represent an extrapolation. While this is clearly a very simple example, it does illustrate an issue that the cheminformatics community needs to take a bit more notice of.
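The distinction can be sketched numerically (the X values below are hypothetical): in both cases the model is asked for a prediction at X = 3, but only in the first case does the query sit among the training data.

```python
# Toy sketch: six hypothetical compounds, prediction requested at X = 3.
# Case A: X is a physical descriptor (e.g. logP) spread across the range,
# so X = 3 sits among the training data (interpolation).
x_physical = [1.0, 1.5, 2.5, 3.5, 4.0, 5.0]
# Case B: X merely separates two clusters of structurally related analogs,
# so X = 3 falls in the gap between them (extrapolation).
x_partition = [1.0, 1.1, 1.2, 4.8, 4.9, 5.0]

query = 3.0
for xs in (x_physical, x_partition):
    nearest = min(abs(x - query) for x in xs)
    print(round(nearest, 2))  # 0.5 for case A, then 1.8 for case B
```

The descriptor ranges are identical in both cases, so a naive check of whether the query lies within the range of the training data would not flag the extrapolation in case B.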


Chemical space coverage is a key consideration for anyone using ML to predict activity and properties for a series of structurally-related compounds. The term "Big Data" does tend to get over-used but being globally "big" is no guarantee that local regions of chemical space (e.g. the structural series that a medicinal chemistry team may be working on) are adequately covered. The difficulty for the chemists is that they don't know whether their structural series is in a cluster in the training set space or in a hole. In cheminformatic terms, it is unclear whether or not the series that the medicinal chemistry team is working on lies within the applicability domain of the model.

Validation can lead to an optimistic view of model quality when training (and validation) sets are unevenly distributed in chemical space and I’ll ask you to have another look at Figure 1 and to think about what would happen if we did leave one out (LOO) cross validation. If we leave out any one of the data points from either group in Figure 1, the two remaining data points ensure that the model is minimally affected. Similar problems can be encountered even when an external test set is used. My view is that training and test sets need to be selected to cover chemical space as evenly as possible in order to get a realistic assessment of model quality from the validation. Put another way, ML modelers need to view the selection of training and test sets as a design problem in its own right.

At Royal Albatross Centre, Otago Peninsula

Given that Pat's post is billed as a practical view from the trenches, it may be worth saying something about some of the challenges of achieving genuine impact with ML models in real life drug design projects. Drug discovery is incremental in nature and a big part of the process is obtaining the data needed to make decisions as efficiently as possible. In order to have maximum impact on drug discovery, cheminformaticians will need to be involved in how the data is obtained as well as in analyzing the data.

Using an ML model is a data-hungry way to predict biological activity and, at the start of a project, the team is not usually awash with data. Molecular similarity searching, molecular shape matching and pharmacophore matching can deliver useful results using much less data than you would need for building a typical ML model while docking can be used even when there are no known ligands.

ML models that simply predict whether or not a compound will be "active" are unlikely to be of any value in lead optimization. Put another way, if you suggest to lead optimization chemists that they should make compound X rather than compound Y because it is more likely to have better than micromolar activity, they may think that you'd just stepped off the shuttle from the Planet Tharg. To be useful in lead optimization, a model for prediction of biological activity needs to predict pIC50 values (rather than whether or not pIC50 will exceed a threshold) and should be specific to the region of chemical space of interest to the lead optimization team. A model satisfying these requirements may well be more like the boring old QSAR that has been around for decades than the modern ML model. One difficulty that QSAR modelers have always faced when working on real life drug discovery projects is that key decisions have already been made by the time there is enough data with which to build a reliable model.

While I do not think that ML models are likely to have significant impact for prediction of activity against primary targets in drug discovery projects, they do have more potential for prediction of physicochemical properties and off-target activity (for which measured data are likely to be available for a wider range of chemotypes than is the case for the primary project targets). Furthermore, predictions for physicochemical properties and off-target activity don't usually need to be as accurate as predictions for activity against the primary target. Nevertheless, there will always be concerns about how effectively a model covers relevant chemical space (e.g. structural series being optimized) and it may be safer to just get some measurements done. My advice to lead optimization chemists concerned about solubility would generally be to get measurements for three or four compounds spanning the lipophilicity range in the series and examine the response of aqueous solubility to lipophilicity.

I do have some thoughts on how cheminformatic models can be made more intelligent but this post is already too long so I'll need to discuss these in a future post. It's "até mais" from me (and the Royal Albatrosses of the South Island).



Friday, 30 November 2018

Ligand efficiency and fragment-to-lead optimizations


The third annual survey (F2L2017) of fragment-to-lead (F2L) optimizations was published last week. Given that it was the second survey (F2L2016) in this series that prompted me to write 'The Nature of Ligand Efficiency' (NoLE), I thought that some comments would be in order. F2L2017 presents analysis of data that had been aggregated from all three surveys and I'll be focusing on the aspects of this analysis that relate to ligand efficiency (LE).

As noted in NoLE, perception of efficiency changes when affinity is expressed in different concentration units and I have argued that this is an undesirable feature for a quantity that is widely touted as useful for design. At the very least, it does place a burden of proof on those who advocate the use of LE in design to either show that the change in perception of efficiency with concentration unit is not a problem or to justify their choice of the 1 M concentration unit. One difficulty that LE advocates face is that the nontrivial dependency of LE on the concentration unit only came to light a few years after LE was introduced as "a useful metric for lead selection" and, even now, some LE advocates appear to be in a state of denial. Put more bluntly, you weren't even aware that you were choosing the 1 M concentration unit when you started telling medicinal chemists that they should be using LE to do their jobs but you still want us to believe that you made the correct choice?
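To make the unit dependency concrete, here is a sketch using the usual definition LE = -ΔG°/N_heavy, with 2.303RT ≈ 1.37 kcal/mol at 298 K (the Kd values and heavy atom counts below are hypothetical):

```python
from math import log10

# LE = -deltaG/N_heavy = (2.303RT / N) * log10(C_standard / Kd), with
# 2.303RT ~ 1.37 kcal/mol at 298 K. The standard-state concentration
# C_standard is a choice, and the choice decides which compound looks
# 'more efficient'. Kd values and atom counts are hypothetical.
def ligand_efficiency(kd_molar, n_heavy, c_standard=1.0):
    return 1.37 * log10(c_standard / kd_molar) / n_heavy

frag = (1e-4, 12)   # hypothetical fragment: Kd = 100 uM, 12 heavy atoms
lead = (1e-8, 30)   # hypothetical lead:     Kd = 10 nM,  30 heavy atoms

for c in (1.0, 1e-3):  # express affinity relative to 1 M, then to 1 mM
    le_frag = ligand_efficiency(*frag, c_standard=c)
    le_lead = ligand_efficiency(*lead, c_standard=c)
    print(c, round(le_frag, 2), round(le_lead, 2), le_frag > le_lead)
    # the verdict on which compound is more 'efficient' flips with the unit
```

With the 1 M unit the fragment looks more efficient than the lead; with the 1 mM unit the ranking reverses, even though nothing physical has changed.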

I'm assuming that the authors of F2L2017 would all claim familiarity with the fundamentals of physical chemistry and biophysics while some of the authors may even consider themselves to be experts in these areas. I'll put the following question to each of the authors of F2L2017: what would your reaction be to analysis showing that the space group for a crystal structure changed if the unit cell parameters were expressed using different units? I can also put things a bit more coarsely by noting that to examine the effect on perception of changing a unit is, when applicable, a most efficacious bullshit detector.

The analysis in F2L2017 that I'll focus on is the comparison between fragment hits and leads. As I showed in NoLE, it is meaningless to compare LE values because LE has a nontrivial dependency on the concentration unit used to express affinity. LE advocates can of course declare themselves to be Experts (or even Thought Leaders) and invoke morality in support of their choice of the 1 M concentration unit. However, this is a risky tactic because physical science can't accommodate 'privileged' units and an insistence that quantities have to be expressed in specific units might be taken as evidence that one is not actually an Expert (at least not in physical science).

So let's take a look at what F2L2017 has to say about LE in the context of F2L optimizations.

"The distributions for fragment and lead LE have also remained reasonably constant. On average there is no significant change in LE between fragment and lead (ΔLE = 0.004, p ≈ 0.8). Figure 5A shows the distribution of ΔLE, which is approximately centered around zero, although interestingly there are more examples where LE increases from fragment to lead (40) than where a decrease is seen (25). Some caution is warranted when interpreting these data, as our minimum criterion for 100-fold potency improvement may have introduced some selection bias. Nevertheless, there is no clear evidence in this data set that LE changes systematically during fragment optimization. Although the average change in LE from fragment to lead is small, Figure 5B shows that the correlation between fragment and lead LE is modest (R2 = 0.22), with a mean absolute difference between fragment and lead LE of 0.08."

This might be a good point at which to remind the authors of F2L2017 about some of the more extravagant claims that have been made for LE. It has been asserted that “fragment hits typically possess high ‘ligand efficiency’ (binding affinity per heavy atom) and so are highly suitable for optimization into clinical candidates with good drug-like properties”.  It has also been claimed that "ligand efficiency validated fragment-based design".  However, the more important point is that it is completely meaningless to compare values of LE of hits and leads because you will come to different conclusions if you express affinity using a different concentration unit (see Table 2 in NoLE). It is also worth noting that expressing affinity in units of 1 M introduces selection bias just as does the requirement for 100-fold potency improvement. 

Had I been reviewing F2L2017, I'd have suggested that the authors might think a bit more carefully about exactly why they are analyzing differences between LE values for fragments and leads. A perspective on fragment library design (reviewed in this post) correctly stated that a general objective of optimization projects is “ensuring that any additional molecular weight and lipophilicity also produces an acceptable increase in affinity". If you're thinking along these lines then scaling the F2L potency increase by the corresponding increase in molecular size makes a lot more sense than comparing LE for the fragments and leads. This quantifies how efficiently (in terms of increased molecular size) the potency gains for the F2L project have been achieved. This is not a new idea and I'll direct readers toward a 2006 study in which it was noted that a tenfold increase in affinity corresponded to a mean increase in molecular weight of 64 Da (standard deviation = 18 Da) for 73 compound pairs from FBLD projects. This is how group efficiency (GE) works and I draw the attention of the two F2L2017 authors from Astex to a perceptive statement made by their colleagues that GE is “a more sensitive metric to define the quality of an added group than a comparison of the LE of the parent and newly formed compounds”.

The distinction between a difference in LE and a difference in affinity that has been scaled by a difference in molecular size becomes a whole lot clearer if you examine the relevant equations. Equation (1) defines the F2L LE difference and the first thing that you'll notice is that it is algebraically more complex than equation (2). This is relevant because LE advocates often tout the simplicity of the LE metric. However, the more significant difference between the two is that the concentration that defines the standard state is present in equation (1) but absent in equation (2). This means that you get the same answer when you scale affinity difference by the corresponding molecular size difference regardless of the units in which you express affinity.
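A sketch of the contrast, in the spirit of equations (1) and (2) (working in pKd units so the common 2.303RT factor is dropped; the pKd values and atom counts are hypothetical): changing the concentration unit shifts every pKd by the same constant, and that shift survives in the LE difference but cancels in the size-scaled affinity difference.

```python
# Hypothetical fragment-to-lead pair, in pKd units (2.303RT factor omitted).
def delta_le(pkd_frag, n_frag, pkd_lead, n_lead):
    """LE difference: depends on the standard-state choice via the pKd values."""
    return pkd_lead / n_lead - pkd_frag / n_frag

def size_scaled(pkd_frag, n_frag, pkd_lead, n_lead):
    """Affinity difference scaled by molecular size difference: unit-independent."""
    return (pkd_lead - pkd_frag) / (n_lead - n_frag)

frag, lead = (4.0, 12), (8.0, 30)  # hypothetical (pKd, heavy atom count)

for shift in (0.0, -3.0):  # 0 for a 1 M unit, -3 for a 1 mM unit
    args = (frag[0] + shift, frag[1], lead[0] + shift, lead[1])
    print(round(delta_le(*args), 3), round(size_scaled(*args), 3))
    # the LE difference even changes sign; the scaled difference doesn't budge
```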


So let's see how things look if you're prepared to think beyond LE when assessing F2L optimizations. Here's a figure from NoLE in which I've plotted the change in affinity against the change in number of non-hydrogen atoms for the F2L optimizations surveyed in F2L2016. The molecular size efficiency for each optimization can be calculated by dividing the change in affinity by the change in number of non-hydrogen atoms. I've drawn lines corresponding to minimum and maximum values of molecular size efficiency and have also shown the quartiles.

So now it's time to wrap things up. A physical quantity that is expressed in a different unit is still the same physical quantity and I presume that all the authors of F2L2017 would have been aware of this while they were still undergraduates. LE was described as thermodynamically indefensible in comments on Derek's post on NoLE and choosing to defend an indefensible position usually ends in tears (just as it did for the French at Dien Bien Phu in 1954). The dilemma facing those who seek to lead opinion in FBDD is that to embrace the view that the 1 M concentration unit is somehow privileged requires that they abandon fundamental physicochemical principles that they would have learned as undergraduates.   

Sunday, 14 October 2018

A PAINful itch

I've been meaning to take a look at the Seven Year Itch (SYI) article on PAINS for some time. SYI looks back over the preceding 7 years of PAINS while presenting a view of future directions. One general comment that I would make of SYI is that it appears to try to counter criticisms of PAINS filters without explicitly acknowledging these criticisms.

This will be a long post and strong coffee may be required. Before starting, it must be stressed that I neither deny that assay interference is a significant problem nor do I assert that compounds identified by PAINS filters are benign. The essence of my criticism of much of the PAINS analysis is that the rhetoric is simply not supported by the data. It has always been easy to opine that chemical structures look unwholesome but it has always been rather more difficult to demonstrate that compounds are behaving pathologically in assays. One observation that I would make about modern drug discovery is that fact and opinion often become entangled to the extent that those who express (and seek to influence) opinions are no longer capable of distinguishing what they know from what they believe.

I've included some photos to break up the text a bit and these are from a 2016 visit to the north of Vietnam.  I'll start with this one taken from the western shore of Hoan Kiem Lake the night after the supermoon.

Hanoi moon

I found SYI to be something of a propaganda piece with all the coherence of a six-hour Fidel Castro harangue. As is typical for articles in the PAINS literature, SYI is heavy in speculation and opinion but is considerably lighter in facts and measured data. It wastes little time in letting readers know how many times the original PAINS article was cited. One criticism that I have made about the original PAINS article (that also applies to SYI and the articles in between) is that the article neither defines the term PAINS (other than to expand the acronym) nor does it provide objective criteria by which a compound can be shown experimentally to be (or not to be) a PAINS (or is that a PAIN). An 'unofficial' definition for the term PAINS has actually been published and I think that it's pretty good:

"PAINS, or pan-assay interference compounds, are compounds that have been observed to show activity in multiple types of assays by interfering with the assay readout rather than through specific compound/target interactions."

While PAINS purists might denounce the creators of the unofficial PAINS definition for heresy and unspecified doctrinal errors, I would argue that the unofficial definition is more useful than the official definition (PAINS are pan-assay interference compounds). I would also point out that some of those who introduced the unofficial definition actually use experiments to study assay interference when much of the official PAINSology (or should that be PAINSomics) consists of speculation about the causes of frequent-hitter behavior. One question that I shall put to you, the reader, is how often, when reading an article on PAINS, do you see real examples of experimental studies that have clearly demonstrated that specific compounds exhibit pan-assay interference?

Restored bunker and barbed wire at Strongpoint Béatrice which was the first to fall to the Viet Minh.

Although the reception of PAINS filters has generally been positive, JCIM has published two articles (the first by an Associate Editor of that journal and the second by me) that examine the PAINS filters critically from a cheminformatic perspective. The basis of the criticism is that the PAINS filters are predictors of frequent-hitter behavior for assays using an AlphaScreen readout and they have been developed using proprietary data. It's quite a leap from frequent-hitter behavior when tested at single concentrations in a panel of six AlphaScreen assays to pan-assay interference. In the language of cheminformatics, we can state that the PAINS filters have been extrapolated out of a narrow applicability domain and they have been reported (ref and ref) to be less predictive of frequent-hitter behavior in these situations. One point that I specifically made was that a panel of six assays all using the same readout is a suboptimal design of an experiment to detect and quantify pan-assay interference.

In my article, bad behavior in assays was classified as Type 1 (assay result gives an incorrect indication of the extent to which the compound affects the function of the target) or Type 2 (compounds affect target function by an undesirable mechanism of action). I used these rather bland labels because I didn't want to become ensnared in a Dien Bien Phu of nomenclature and it must be stressed that there is absolutely no suggestion that other people use these labels. My own preference would actually be to only use the term interference for Type 1 bad behavior and it's worth remembering that Type 1 bad behavior can also lead to false negatives.

The distinction between Type 1 and Type 2 behaviors is an important and useful one to make from the perspective of drug discovery scientists who are making decisions as to which screening hits to take forward. Type 1 behavior is undesirable because it means that you can't believe the screening result for hits but, provided that you can find an assay (e.g. label-free measurement of affinity) that is not interfered with, Type 1 behavior is a manageable, although irksome, problem. Running a second assay that uses an orthogonal readout may shed light on whether Type 1 behavior is an issue although, in some cases, it may be possible to assess, and even correct for, interference without running the orthogonal second assay. Type 2 behavior is a much more serious problem and a compound that exhibits Type 2 behavior needs to be put out of its misery as swiftly and mercifully as possible. The challenge presented by Type 2 behavior is that you need to establish the mechanism of action simply to determine whether or not it is desirable. Running a second assay with an orthogonal readout is unlikely to provide useful information since the effect on target function is real.

Barbed wire at Strongpoint Béatrice. I'm guessing that it was not far from here that, on the night of 13th/14th March, 1954, Captain Riès would have made the final transmission: "It's all over - the Viets are here. Fire on my position. Out."

Most (all?) of the PAINSology before SYI failed to make any distinction between Type 1 and Type 2 bad behavior. SYI states "There does not seem to be an industry-accepted nomenclature or ontology of anomalous binding behavior" and makes some suggestions as to how this state of affairs might be rectified. SYI recommends that "Actives" be first classified as "target modulators" or "readout modulators". The "target modulators" are all considered to be "true positives" and these are further classified as "true hits" or "false hits". All the "readout modulators" are labelled as "false positives". Unsurprisingly, the authors recommend that all the "false hits" and "false positives" be labelled as pan-assay interference compounds regardless of whether the compounds in question actually exhibit pan-assay interference. In general, I would advise against drawing a distinction between the terms "hit" and "positive" in the context of screening but, if you choose to do so, then you really do need to define the terms much more precisely than the authors have done.

I think the term "readout modulator" is reasonable and is equivalent to my definition of Type 1 behavior (assay result gives an incorrect indication of the extent to which the compound affects the function of the target). However, I strongly disagree with the classification of compounds showing "non-specific interaction with target leading to active readout" as readout modulators since I'd regard any interaction with the target that affects its function to be modulation. My understanding is that the effects of colloidal aggregators on protein function are real (although not exploitable) and that it is often possible to observe reproducible concentration responses. My advice to the authors is that, if you're going to appropriate colloidal aggregators as PAINS, then you might at least put them in the right category.

While the term "target modulator" is also reasonable, it might not be such a great idea to use it in connection with assay interference since it's also quite a good description of a drug. Consider the possibility of homeopaths and anti-vaxxers denouncing the pharmaceutical industry for poisoning people with target modulators. However, I disagree with the use of the term "false hit" since the modulation of the target is real even when the mechanism of action is not exploitable. There is also a danger of confusing the "false hits" with the "false positives" and SYI is not exactly clear about the distinction between a "hit" and a "positive". In screening both terms tend to be used to specify results for which the readout exceeds a threshold value.

The defensive positions on one of the hills of Strongpoint Béatrice have not been restored. Although the trenches have filled in with time, they are not always as shallow as they appear to be in this photo (as I discovered when I stepped off the path).

It's now time to examine what SYI has to say and singlet oxygen is as good a place as any to start from. One criticism of PAINS filters that I have made, both in my article and the Molecular Design blog, is that some of the frequent-hitter behavior in the PAINS assay panel may be due to quenching or scavenging of singlet oxygen which is an essential component of the AlphaScreen readout. SYI states:

"However, while many PAINS classes contain some member compounds that registered as hits in all the assays analyzed and that therefore could be AlphaScreen-specific signal interference compounds, most compounds in such classes signal in only a portion of assays. For these, chemical reactivity that is only induced in some assays is a plausible mechanism for platform-independent assay interference."

The authors seem to be interpreting the observation that a compound only hits in a portion of assays as evidence for platform-independent assay interference. This is actually a very naive argument for a number of reasons. First, compounds do not all appear to have been assayed at the same concentration in the original PAINS assay panel and there may be other sources of variation that were not disclosed. Second, different readout thresholds may have been used for the assays in the panel and noise in the readout introduces a probabilistic element to whether or not the signal for a compound exceeds the threshold. Last, but definitely not least, the molecular structure of a compound does influence the efficiency with which it quenches or scavenges singlet oxygen. A recent study observed that PAINS "alerts appear to encode primarily AlphaScreen promiscuous molecules".

If you read enough PAINS literature, you'll invariably come across sweeping generalizations made about PAINS. For example, it has been claimed that "Most PAINS function as reactive chemicals rather than discriminating drugs." SYI follows this pattern and asserts:

"Another comment we frequently encounter and very relevant to this journal is that PAINS may not be appropriate for drug development but may still comprise useful tool compounds. This is not so, as tool compounds need to be much more pharmacologically precise in order that the biological responses they invoke can be unambiguously interpreted."

While it is encouraging that the authors have finally realized the significance of the distinction between readout modulators and target modulators, they don't seem to be fully aware of the implications of making this distinction. Specifically, one can no longer make the sweeping generalizations about PAINS that are common in PAINS literature. Consider a hypothetical compound that is an efficient quencher of singlet oxygen and that has shown up as a hit in all six AlphaScreen assays of the original PAINS assay panel. While many would consider this compound to be a PAINS (or PAIN), I would strongly challenge a claim that observation of frequent-hitter behavior in this assay panel would be sufficient to rule out the use of the compound as a tool.

SYI notes that PAINS are recognized by other independently developed promiscuity filters.

"The corroboration of PAINS classes by such independent efforts provides strong support for the structural filters and subsequent recognition and awareness of poorly performing compound classes in the literature. It is instructive therefore to introduce two more recent and fully statistically validated frequent-hitter analytical methods that are assay platform-independent. The first was reported in 2014 by AstraZeneca(16) and the second in 2016 by academic researchers and called Badapple.(27)"

I don't think it is particularly surprising (or significant) that some of the PAINS classes are recognized as frequent-hitters by other models for frequent-hitter behavior. What is not clear is how many of the PAINS classes are recognized by the other frequent-hitter models or how 'strong' the recognition is. I would challenge the description of the AstraZeneca frequent-hitter model as "fully statistically validated" since validation was performed using proprietary data. I made a similar criticism of the original PAINS study and would suggest that the authors take a look at what this JCIM editorial has to say about the use of proprietary data in modeling studies.       

The French named this place Eliane and it was quieter when I visited than it would have been on 6th May, 1954 when the Viet Minh detonated a large mine beneath the French positions. It has been said that the alphabetically-ordered (Anne-Marie to Isabelle) strongpoints at Dien Bien Phu were named for the mistresses of the commander, Colonel (later General) Christian de Castries although this is unlikely.

SYI summarizes as follows:

"In summary, we have previously discussed a variety of issues key to interpretation of PAINS filter outputs, ranging from HTS library design and screening concentration, relevance of PAINS-bearing FDA-approved drugs, issues in SMARTS to SLN conversion, the reality of nonfrequent hitter PAINS, as well as PAINS and non-PAINS that are respectively not recognized or recognized in the PAINS filters as originally published. However, nowhere has a discussion around these key principles been summarized in one article, and that is the point of the current article. Had this been the case, we believe some recent contributions to the literature would have been more thoughtfully directed. (21,32)"

I must confess that reference to the reality of nonfrequent hitter pan-assay interference compounds would normally prompt me to advise authors to stay off the peyote until the manuscript has been safely submitted. However, the bigger problem embedded in the somewhat Rumsfeldesque first sentence is that you need objective and unambiguous criteria by which compounds can be determined to be PAINS or non-PAINS before you can talk about "key principles". You also need to acknowledge that interference with readout and undesirable mechanisms of action are entirely different problems requiring entirely different solutions.

I noted that recent contributions to the literature from me and from a JCIM Associate Editor (who might know a bit more about cheminformatics than the authors) were criticized for being insufficiently thoughtful. To be criticized in this manner is, as the late, great Denis Healey might have observed, "like being savaged by a dead sheep". Despite what the authors believe, I can confirm that my contribution to the literature would have been very similar even if SYI had been published beforehand. Nevertheless, I would suggest to the authors that dismissing the feedback from a JCIM Associate Editor as if he were a disobedient schoolboy might not have been such a smart move. For example, it could get the JMC editors wondering a bit more about exactly what they'd got themselves into when they decided to endorse a frequent-hitter model as a predictor of pan-assay interference. The endorsement of a predictive model by a premier scientific journal represents a huge benefit to the creators of the model but the flip side is that it also represents a huge risk to the journal. 

So that's all that I want to say about PAINS and it's a good point to wrap things up so that I can return to Vietnam for the remainder of the post.

I'm pretty sure that neither General Giap nor General de Castries visited the summit of Fansipan which, at 3143 meters, is the highest point in Vietnam (I wouldn't have either, had a cable car not been installed a few months before I visited). It's a great place to enjoy the sunset.

Back in Hanoi, I attempted to pay my respects to Uncle Ho, as I've done on two previous visits to this city, but the timing was not great (they were doing the annual formaldehyde change). Uncle Ho is in much better shape than Chairman Mao, who is actually seven years 'younger', and this is a consequence of having been embalmed by the Russians (the acknowledged experts in this field). Chairman Mao had the misfortune to expire when Sino-Soviet relations were particularly frosty and his pickling was left to some of his less expert fellow citizens. It is also said that the Russian embalming team arrived in Hanoi before Uncle Ho had actually expired...

Catching up with Uncle Ho