Sunday, 22 March 2015

Literature pollution for dummies


So I thought that I’d conclude this mini-series ( 1 | 2 ) of PAINS posts with some lighter fare, the style of which is intended to be a bit closer to that of a PAINS-shaming post ( 1 | 2 ) than is normal for posts here.  As observed previously, the PAINS-shaming posts are vapid and formulaic, although I have to admit that it’s always a giggle when people spit feathers on topics outside their applicability domains.  I must also concede that one of the PAINS-shaming posts was even cited in a recent article in ACS Medicinal Chemistry Letters, although this citation might be regarded as more a confirmation of the old adage that ‘flattery will get you anywhere’ than an indication of the scientific quality of the post.  I shouldn’t knock that post too much because it’s what goaded me into taking a more forensic look at the original PAINS article.  However, don’t worry if you get PAINS-shamed because, with acknowledgement to Denis Healey, being PAINS-shamed is “like being savaged by a dead sheep”.

I should say something about the graphic with which I’ve illustrated this blog post.  It shows a diving Junkers Ju 87 Stuka and I’ll let aviation author William Green, writing in ‘Warplanes of the Third Reich’, tell you more about this iconic aircraft:

“The Ju 87 was an evil-looking machine, with something of the predatory bird in its ugly contours – its radiator bath and fixed, spatted undercarriage resembling gaping jaws and extended talons – and the psychological effect on the recipients of its attentions appeared almost as devastating as the bombs that it delivered with such accuracy.  It was an extremely sturdy warplane, with light controls, pleasant flying characteristics and a relatively high standard of manoeuvrability. It offered crew members good visibility and it was able to hit a target in a diving attack with an accuracy of less than 30 yards. All these were highly desirable characteristics but they tended to blind Ob.d.L. to the Ju 87’s shortcomings. Its use presupposed control of the air, for it was one of the most vulnerable of combat aircraft and the natural prey of the fighter…”

I really should get back on-topic because I doubt that Rudel ever had to worry about singlet oxygen while hunting T-34s on the Eastern Front.  I’ve promised to show you how to get away with polluting the literature so let’s suppose you’ve submitted a manuscript featuring PAINful structures and the reviewers have said, “Nein, es ist verboten” (“No, it is forbidden”).  What should you do next? The quick answer is, “It depends”.  If the reviewers don’t mention the original PAINS article and simply say that you’ve just not done enough experimental work to back up your claims then, basically, you’re screwed. This is probably a good time to get your ego to take a cold shower and to find an unfussy open access journal that will dispense with the tiresome ritual of peer review and quite possibly include a package of downloads and citations for no additional APC.

Let’s look at another scenario, one in which the reviewers have stated that the manuscript is unacceptable simply because the compounds match substructures described in the original PAINS article.  This is the easiest situation to deal with although if you’re using AlphaScreen to study a protein-protein interaction you should probably consider the open access suggestion outlined above.  If not using AlphaScreen, you can launch your blitzkrieg although try not to make a reviewer look like a complete idiot because the editor might replace him/her with another one who is a bit more alert.  You need to point out to the editor that the applicability domain (using this term will give your response a degree of authority) for the original PAINS filters is AlphaScreen used to assay protein-protein interactions and therefore the original PAINS filters are completely irrelevant to your submission.  You might also play the singlet oxygen card if you can find evidence (here’s a useful source for this type of information) for quenching/scavenging behavior by compounds that have aggrieved the reviewers on account of matching PAINS filters.

Now you might get a more diligent reviewer who looks beyond the published PAINS filters and digs up some real dirt on compounds that share a substructure with the compounds that you’ve used in your study and, when this happens, you need to put as much chemical space as you can between the two sets of compounds.  Let’s use Voss et al (I think that this was what one of the PAINS-shaming posts was trying to refer to) to illustrate the approach.  Voss et al describe some rhodanine-based TNF-alpha antagonists, the ‘activity’ of which turned out to be light-dependent and I would certainly regard this sort of light-dependency as very dirty indeed.  However, there are only four rhodanines described in this article (shown below) and each has a heteroaromatic ring linked to the exocyclic double bond (an extended pi-system is highly relevant to photochemistry) and each is substituted with ethyl on the ring nitrogen.  Furthermore, that heteroaromatic ring is linked to (or fused with) either a benzene or heteroaromatic ring in each of the four compounds.  So here’s how you deal with the reviewers.  First point out that the bad behavior is only observed for four rhodanines assayed against a single target protein.  If your rhodanines lack the exocyclic double bond, you can deal with the reviewers without breaking sweat because the substructural context of the rhodanine ring is so different and you might also mention that your rhodanines can’t function as Michael acceptors.  You should also be able to wriggle off the hook if your rhodanines have the exocyclic double bond but only alkyl substituents on it. Sanitizing a phenyl substituent on the exocyclic double bond is a little more difficult and you should first stress that the bad behavior was only observed for rhodanines with five-membered, electron-rich heterocycles linked to that exocyclic double bond.
You’ll also be in a stronger position if your phenyl ring lacks the additional aryl or heteroaryl substituent (or ring fusion) that is conserved in the four rhodanines described by Voss et al because this can be argued to be relevant to photochemistry.

Things will be more difficult if you’ve got a heteroaromatic ring linked to the exocyclic double bond and this is when you’ll need to reach into the bottom drawer for appropriate counter-measures with which to neutralize those uncouth reviewers.  First take a close look at that heteroaromatic ring.  If it is six-membered and/or relatively electron-poor, consider drawing the editor’s attention to the important differences between your heteroaromatic ring and those in the offending rhodanines of Voss et al.  The lack of aryl or heteroaryl substituents (or ring fusions) on your heteroaromatic ring will also strengthen your case so make sure the editor knows.  Finally, consider calculating molecular similarity between your rhodanines and those in Voss et al. You want this to be as low as possible so experiment with different combinations of fingerprints and metrics (e.g. Tanimoto coefficient) to find whatever gives the best results (i.e. the lowest similarity).
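
The similarity calculation itself is trivial; here’s a minimal sketch of the Tanimoto coefficient, with plain Python sets standing in for real binary fingerprints (the fragment names are hypothetical, purely for illustration; in practice you’d generate fingerprints with a cheminformatics toolkit):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient for binary fingerprints: |A & B| / |A | B|."""
    a, b = set(fp_a), set(fp_b)
    if not a and not b:
        return 0.0  # convention: two empty fingerprints count as dissimilar
    return len(a & b) / len(a | b)

# Hypothetical fragment-based fingerprints for two rhodanines
fp1 = {"rhodanine", "exocyclic_C=C", "thiophene", "N-ethyl"}
fp2 = {"rhodanine", "exocyclic_C=C", "phenyl", "N-methyl"}
print(round(tanimoto(fp1, fp2), 3))  # 2 shared features of 6 total -> 0.333
```

The point of ‘experimenting’ with fingerprints and metrics, of course, is that different feature definitions put different numbers in the numerator and denominator, so the same pair of structures can look more or less similar depending on how you choose to describe them.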

So this is a good place at which to conclude this post and this series of posts.  I hope you’ve found it fun and have enjoyed learning how to get away with polluting the literature.

Wednesday, 18 March 2015

Is the literature polluted by singlet oxygen quenchers and scavengers?


So apparently I’m a critic of the PAINS concept so maybe it’s a good idea to state my position.  Firstly, I don’t know exactly what is meant by ‘PAINS concept’ so, to be quite honest, it is difficult to know whether or not I am a critic. Secondly, I am fully aware that many compounds are observed as assay hits for any of a number of wrong reasons and completely agree that it is important to understand the pathological behavior of compounds in assays so that resource does not get burned unnecessarily. At the same time we need to think more clearly about different types of behavior in assays.  One behavior is that the compound does something unwholesome to a protein and, when this is the case, it is absolutely correct to say, ‘bad compound’ regardless of what it does (or doesn’t do) to other proteins.  Another behavior is that the compound interferes with the assay but leaves the target protein untouched and, in this case, we should probably say ‘bad assay’ because the assay failed to conclude that the protein had emerged unscathed from its encounter with the compound. It is usually a sign of trouble when structurally-related compounds show activity in a large number of assays but there are potentially lessons to be learned by those prepared to look beyond hit rates. If the assays that are hit are diverse in type then we should be especially worried about the compounds.  If, however, the assays that are hit are of a single type then perhaps the specific assay type is of greater concern. Even when hit rates are low, appropriate analysis of the screening output may still reveal that something untoward is taking place. For example, a high proportion of hits in common may reflect that a mechanistic feature (e.g. catalytic cysteine) is shared between two enzymes (e.g. PTP and cysteine protease).

While I am certainly not critical of attempts to gain a greater understanding of screening output, I have certainly criticized over-interpretation of data in print ( 1 | 2 ) and will continue to do so.  In this spirit, I would challenge the assertion, made in the recent Nature PAINS article, that “Most PAINS function as reactive chemicals rather than discriminating drugs” on the grounds that no evidence is presented to support it.  As noted in a previous post, the term ‘PAINS’ was introduced to describe compounds that showed frequent-hitter behavior in a panel of six AlphaScreen assays and this number of assays would have been considered a small number even two decades ago when some of my Zeneca colleagues (and presumably our opposite numbers elsewhere in Pharma) started looking at frequent-hitters. After reading the original PAINS article, I was left wondering why only six of 40+ screens were used in the analysis and exactly how these six screens had been selected.  The other point worth reiterating is that including only a single type of assay in an analysis like this makes it impossible to explore the link between frequent-hitter behavior and assay type. Put another way, restricting analysis to a single assay type means that the results of the analysis constitute much weaker evidence that compounds interfere with other assay types or are doing something unpleasant to target proteins.

I must stress that I’m definitely not saying that the results presented in the original PAINS article are worthless. Knowledge of AlphaScreen frequent-hitters is certainly useful if you’re running this type of assay.  I must also stress that I’m definitely not claiming that AlphaScreen frequent hitters are benign compounds.  Many of the chemotypes flagged up as PAINS in that article look thoroughly nasty (although some, like catechols, look more ‘ADMET-nasty’ than ‘assay-nasty’).  However, the issue when analyzing screening output is not simply to be of the opinion that something looks nasty but to establish its nastiness (or otherwise) definitively in an objective manner.   

It’s now a good time to say something about AlphaScreen and there’s a helpful graphic in Figure 3 of the original PAINS article. Think of two beads held in proximity by the protein-protein interaction that you’re trying to disrupt.  The donor bead functions as a singlet oxygen generator when you zap it with a laser. Some of this singlet oxygen makes its way to the acceptor bead where its arrival is announced with the emission of light.  If you disrupt the protein-protein interaction then the beads are no longer in close proximity and the (unstable) singlet oxygen doesn’t have sufficient time to find an acceptor bead before it is quenched by solvent.  I realize this is a rushed explanation but I hope that you’ll be able to see that disruption of the protein-protein interaction will lead to a loss of signal because most of the singlet oxygen gets quenched before it can find an acceptor bead.

I’ve used this term ‘quench’ and I should say a bit more about what it means.  My understanding of the term is that it describes the process by which a compound in an excited state is returned to the ground state and it can be thought of as a physical rather than chemical process, even though intermolecular contact is presumably necessary.  The possibility of assay interference by singlet oxygen quenchers is certainly discussed in the original PAINS article and it was noted that:

“In the latter capacity, we also included DABCO, a strong singlet oxygen quencher which is devoid of a chromophore, and diazobenzene itself”

An apparent IC50 of 85 micromolar was observed for DABCO in AlphaScreen and that got me wondering about what the pH of the assay buffer might have been.  The singlet oxygen quenching abilities of DABCO have been observed in a number of non-aqueous solvents which suggests that the neutral form of DABCO is capable of quenching singlet oxygen.  While I don’t happen to know if protonated DABCO is also an effective quencher of singlet oxygen, I would expect (based on a pKa of 8.8) the concentration of the neutral form in an 85 micromolar solution of DABCO buffered at neutral pH to be about 1 micromolar.   Could this be telling us that quenching of singlet oxygen in AlphaScreen assays is possibly a bigger deal than we think?
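
That ‘about 1 micromolar’ figure is just a Henderson-Hasselbalch back-of-envelope calculation; here’s a quick sketch of it (assuming a pKa of 8.8 and a buffer at pH 7, as in the text; the actual assay pH is not known to me):

```python
def neutral_fraction(pka, ph):
    """Fraction of a monoprotic base present in its neutral (free base) form
    at a given pH, from the Henderson-Hasselbalch equation."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

total = 85e-6       # total DABCO concentration: 85 micromolar
pka, ph = 8.8, 7.0  # pKa of DABCO; assume a neutral buffer
neutral = total * neutral_fraction(pka, ph)
print(f"{neutral * 1e6:.1f} uM neutral DABCO")  # prints "1.3 uM neutral DABCO"
```

If only the neutral form quenches, the effective concentration of quencher at that apparent IC50 would therefore be in the low micromolar range, which is what prompts the question below.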

Compounds can also react with singlet oxygen and, when they do so, the process is sometimes termed ‘scavenging’. If you just observe the singlet oxygen lifetimes, you can’t tell whether the singlet oxygen is returned harmlessly to its ground state or if a chemical reaction occurs.  Now if you read enough PAINS articles or PAINS-shaming blog posts, you’ll know that there is a high likelihood that, at some point, The Great Unwashed will be castigated for failing to take adequate notice of certain articles deemed to be of great importance by The Establishment.  In this spirit, I’d like to mention that compounds with sulfur doubly bonded to carbon have been reported ( 1 | 2 | 3 | 4 | 5 ) to quench or scavenge singlet oxygen and this may be relevant to the ‘activity’ of rhodanines in AlphaScreen assays.

The original PAINS article is a valuable compilation of chemotypes associated with frequent-hitter behavior in AlphaScreen assays although I have questioned whether or not this behavior represents strong evidence that compounds are doing unwholesome things to the target proteins.  It might be prudent to check the singlet oxygen quencher/scavenger literature a bit more carefully before invoking a high hit rate in a small panel of AlphaScreen assays in support of assertions that literature has been polluted or that somebody’s work is crap.  I’ll finish the post by asking whether tethering donor and acceptor beads covalently to each other might help identify compounds that interfere with AlphaScreen by taking out singlet oxygen. Stay tuned for the next blog post in which I’ll show you, with some help from Denis Healey and the Luftwaffe, how to pollute the literature (and get away with it).         

Friday, 6 March 2015

Free energy perturbation in lead optimization

Free energy simulation methods such as free energy perturbation (FEP) have been around for a while and, back in the late eighties when my Pharma career started, they were being touted for affinity prediction in drug discovery.  The methods never really caught on in the pharma/biotech industry and there are a number of reasons why this may have been the case including the compute-intensive nature of the calculations and the level of expertise required to run them.  This is not to say that nobody in pharma/biotech was using the methods. It’s just that the capability was not widely-perceived to give those who had it a clear advantage over their competitors.   Also there are other ways to use protein structural information in lead optimization and I’ve already written about the importance of forming molecular interactions with optimal binding geometry but without incurring conformational/steric energy penalties. Nevertheless, being able to predict affinity accurately would be high on every drug discovery scientist’s wish list.

A recently published study appears to represent a significant step forward and I decided to take a closer look after seeing it Pipelined and reviewed.  The focus of the study is FEP and a number of innovations are described including an improved force field, enhanced sampling and automated work flow.  The quantity calculated in FEP is ΔΔG° which is a measure of relative binding affinity and this is typically what you want to predict in lead optimization.  We say ΔΔG° because it’s the difference between two ΔG° values which might, for example, be for a compound with an unsubstituted phenyl ring and the corresponding compound with a chloro substituent at C3 of that aromatic ring. When we focus on ΔΔG° we are effectively assuming that it is easier to predict differences in affinity than it is to predict affinity itself from molecular structure and this is a theme that I’ve touched on in a previous post.  Readers familiar with matched molecular pair analysis (MMPA 1 | 2 | 3 | 4 | 5 ) will see a parallel with FEP which I failed to draw when first writing about MMPA although the point has been articulated in subsequent publications (1 | 2).  Of course FEP has been around a lot longer than MMPA so it’s actually much more appropriate to describe the latter as the data-analytic analog of the former.
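
To make the phenyl/3-chloro example concrete, ΔΔG° relates to the ratio of the two binding constants via ΔΔG° = RT·ln(K_new/K_ref). A minimal sketch (the Ki values are hypothetical, chosen only to illustrate the arithmetic):

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)
T = 298.15    # temperature in K (25 C)

def ddg(ki_ref, ki_new, temperature=T):
    """Relative binding free energy in kcal/mol from two Ki values:
    ddG = RT * ln(Ki_new / Ki_ref); negative means the new compound binds tighter."""
    return R * temperature * math.log(ki_new / ki_ref)

# Hypothetical pair: unsubstituted phenyl (Ki = 1 uM) vs 3-chloro analog (Ki = 100 nM)
print(f"{ddg(1e-6, 1e-7):.2f} kcal/mol")  # prints "-1.36 kcal/mol"
```

A handy rule of thumb drops out of this: every tenfold change in affinity corresponds to about 1.4 kcal/mol at room temperature.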

As with MMPA, the rationale is that it is easier to predict differences in the values of a quantity than it is to predict values of the quantity directly from molecular structure.  The authors state:

 “In drug discovery lead optimization applications, the calculation of relative binding affinities (i.e., the relative difference in binding energy between two compounds) is generally the quantity of interest and is thought to afford significant reduction in computational effort as compared to absolute binding free energy calculations”

This study does appear to represent the state of the art although I would like to have seen the equivalent of Figure 3 (plot of FEP-predicted ΔG° versus experimental ΔG°) for the free energy differences which are the quantities that are actually calculated.  I would argue that Figure 3 is somewhat misleading because some of the variation in FEP-predicted ΔG° is explained by variation in the reference ΔG° values.   That said, the relevant information is summarized in Table S2 of the supporting information and the error distribution for the relative binding free energies (ΔΔG°) is shown in Figure S1.

One perception of FEP is that it becomes more difficult to get good results if the perturbation is large and the authors note:

“We find that our methodology is robust up to perturbations of approximately 10 heavy atoms”

Counting atoms is not the only way to gauge the magnitude of a perturbation.  I’d also be interested to see how robustly the methodology handles perturbations that involve changes in ionization state and whether ΔΔG° values of greater magnitude are more difficult to predict than those of smaller magnitude.  Prediction of affinity for compounds that bind covalently, but reversibly, to targets like cysteine proteases would probably also be feasible using these methods.  Something I’ve wondered about for a few years is what would happen if the aromatic nitrogen that frequently accepts a hydrogen bond from the tyrosine kinase hinge was mutated into an aromatic carbon.  If the resulting loss of affinity for this structural transformation was as small as some seem to believe it ought to be then it would certainly open up some ‘patent space’ in what is currently a bit of a log jam. You can also see how FEP might be integrated with MMPA in a lead optimization setting by using the former to predict the effects of structural modifications on affinity and the latter to assess the likely impact of these modifications on ADME characteristics like solubility, permeability and metabolic stability.

So lots of possibilities and this is probably a good place to leave it for now.