As most readers of this blog will know, analysis of large (often proprietary) data sets is very much a part of modern drug discovery. Some will have discerned a tendency for the importance of these studies to get 'talked up' and the proprietary nature of many of the data sets makes it difficult to challenge published claims. There are two ways in which data analysis studies in the drug discovery literature get 'talked up'. Firstly, trends in data are made to look stronger than they actually are and this has been discussed. Secondly, it may be suggested that the applicability domain for an analysis is broader than it actually is.
So it's back to PAINS with the fifth installment in the series ( 1 | 2 | 3 | 4 ) and, if you've found reading these posts tedious, spare a thought for the unfortunate person who has to write them. In the two posts on PAINS that will follow this one, I'll explore how PAINS have become integrated into journal guidelines for authors before concluding the series with some suggestions about how we might move things forward. But before doing this, I do need to take another look at the Nature PAINS article (Chemical con artists foil drug discovery) that was discussed in the first post of the series; I will refer to this article as BW2014 in this post. I'll use 'pathological' as a catch-all term in this post to describe any behavior by compounds in assays that results in an inappropriate assessment of the activity of those compounds.
BW2014 received a somewhat genuflectory review in a Practical Fragments post. You can see from the comments on that post that I was becoming uneasy about the size and 'homogeneity' of the PAINS assay panel, although it was a rather intemperate PAINS-shaming post a couple of months later that goaded me into taking a more forensic look at the field. I'd like to get a few things straight before I get going. It has been known since the mid-1990s that not all high-throughput screening (HTS) output smells of roses, and the challenge has been establishing by experiment that suspect compounds are indeed behaving pathologically. When working up HTS output, we typically have to make decisions based on incomplete information. One question that I'd like you to think about is this: how would knowing that a catechol matched a PAINS substructure change your perception of that compound as a hit from HTS?
So before I go on, it is perhaps a good idea to say what is meant by the term 'PAINS', which is an acronym for Pan Assay INterference compoundS. In the literature and blogs, the term 'PAINS' appears to mean one of the following:
1) Compounds identified as frequent hitters against the panel of six AlphaScreen assays used in the original PAINS study
2) Compounds that have been demonstrated by experiment to behave pathologically in screening
3) Substructural definitions such as, but not necessarily, those described in the original PAINS article, claimed to be predictive of pathological behavior in screening
4) Compounds matching substructural definitions such as, but not necessarily, those described in the original PAINS article
5) Compounds (or classes of compounds) believed to have the potential to behave pathologically in screens.
There is still some ambiguity within the categories and, in the original PAINS study, PAINS are identified by frequent-hitter behavior in an assay panel. Do you think that it is justified to label compounds that fail to hit a single assay in the panel as PAINS simply because they share substructural elements with frequent hitters? Category 5 is especially problematic because it can be difficult to know whether those denouncing a class of compounds as PAINS are doing so on the basis of relevant experimental observations, model-based prediction or 'expert' opinion. I'd guess that those doing the denouncing often don't know either. Drug discovery suffers from a blurring of what has been measured with what has been opined, and this post should give you a better idea of what I'm getting at here.
This is a good point to summarize the original PAINS study. Compounds were identified as PAINS on the basis of frequent-hitter behavior in a panel of six AlphaScreen assays for inhibition of protein-protein interactions. The results of the study were a set of substructural patterns and a summary of the frequent-hitter behavior associated with each pattern. The original PAINS study invokes literature studies and four instances of 'personal communication' in support of the claim that PAINS filters are predictive of pathological behavior in screening although, in the data analysis context, this 'evidence' should be regarded as anecdotal and circumstantial. Neither chemical structures nor assay data were disclosed in the original PAINS study and the data must be regarded as proprietary.
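As an aside, the published PAINS patterns are easy to experiment with: RDKit, for example, ships an encoding of them in its FilterCatalog. The sketch below (using RDKit and a rhodanine example is my choice for illustration, not anything taken from BW2014) shows what matching a compound against those patterns looks like:

```python
from rdkit import Chem
from rdkit.Chem import FilterCatalog

# Build a catalog from the PAINS substructure filters that ship with RDKit.
params = FilterCatalog.FilterCatalogParams()
params.AddCatalog(FilterCatalog.FilterCatalogParams.FilterCatalogs.PAINS)
catalog = FilterCatalog.FilterCatalog(params)

# A benzylidene rhodanine, one of the most frequently discussed PAINS classes.
mol = Chem.MolFromSmiles("O=C1NC(=S)SC1=Cc1ccccc1")

match = catalog.GetFirstMatch(mol)
print(match.GetDescription() if match else "no PAINS match")
```

Bear in mind that a match simply means that the compound shares a substructural element with frequent hitters from that panel of six AlphaScreen assays; it is not, by itself, an experimental observation about behavior in your assay.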
The PAINS substructural patterns would certainly be useful to anybody using AlphaScreen. My criticism of the 'PAINS field' is not of the substructural patterns themselves (or indeed of attempts to identify compounds likely to behave pathologically when screened) but of the manner in which they are extrapolated out of their applicability domain. I would regard interpreting frequent-hitter behavior in a panel of six AlphaScreen assays as evidence of pan-assay interference to be a significant extrapolation.
But I have droned on enough, so now let's take a look at some of what BW2014 has to say:

"Artefacts have subversive reactivity that masquerades as drug-like binding and yields false signals across a variety of assays [1,2]. These molecules — pan-assay interference compounds, or PAINS — have defined structures, covering several classes of compound (see ‘Worst offenders’)."
I don't think that it is correct to equate artefacts with reactivity since compounds that absorb or fluoresce strongly, or that quench fluorescence, can all interfere with assays without actually reacting with anything. My bigger issue with this statement is the claim of "a variety of assays" when the PAINS assay panel consisted of six AlphaScreen assays. Strictly, we should be applying the term 'artefact' to assay results rather than to compounds, but that would be nitpicking. Let's continue from BW2014:
"In a typical academic screening library, some 5–12% of compounds are PAINS [1]."
Do these figures reflect actual analysis of real academic screening libraries? Have these PAINS actually been observed to behave pathologically in real assays or have they simply been predicted to behave badly? Does the analysis take account of the different PAIN levels associated with different PAINS substructures? Continuing from BW2014:
“Most PAINS function as reactive chemicals rather than discriminating drugs. They give false readouts in a variety of ways. Some are fluorescent or strongly coloured. In certain assays, they give a positive signal even when no protein is present. Other compounds can trap the toxic or reactive metals used to synthesize molecules in a screening library or used as reagents in assays.”

“PAINS often interfere with many other proteins as well as the one intended.”
At the risk of appearing repetitive, it is not clear exactly what is meant by the term 'PAINS' here. How many compounds identified as PAINS in the original study were actually shown by experiment to function as "reactive chemicals" under assay conditions? How many compounds identified as PAINS in the original study were actually shown to "interfere with many other proteins"? How many compounds identified as PAINS in the original study were actually shown to interact with even one of the proteins used in the PAINS assay panel? This would have been a good point to have mentioned that singlet oxygen quenchers and scavengers can interfere with the AlphaScreen detection used in all six assays of the original PAINS assay panel.
BW2014 offers some advice on PAINS-proof drug discovery and I'll make the observation that there is an element of 'do as I say, not as I do' to some of this advice. BW2014 suggests:
“Scan compounds for functional groups that could have reactions with, rather than affinity for, proteins.”
You should always be concerned about potential electrophilicity of screening hits (I had two 'levels' of electron-withdrawing group typed as SMARTS vector bindings in my Pharma days although I accept that may have been a bit obsessive) but you also need to be aware that covalent bond formation between protein and ligand is a perfectly acceptable way to engage targets.
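For illustration only (these SMARTS are stand-ins that I've written for this post rather than the vector bindings from my Pharma days, and any real filter set would need careful curation), a scan of hits for a few commonly flagged electrophilic groups might look like this:

```python
from rdkit import Chem

# Illustrative, far-from-exhaustive SMARTS for potentially electrophilic groups.
ELECTROPHILES = {
    "acyl halide": "[CX3](=O)[F,Cl,Br,I]",
    "Michael acceptor (enone)": "[CX3]=[CX3][CX3]=[OX1]",
    "alkyl bromide/iodide": "[CX4][Br,I]",
    "isocyanate": "[NX2]=[CX2]=[OX1]",
}

def flag_electrophiles(smiles):
    """Return the names of the electrophile patterns matched by a molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None  # unparsable SMILES
    return [name for name, smarts in ELECTROPHILES.items()
            if mol.HasSubstructMatch(Chem.MolFromSmarts(smarts))]

# Methyl vinyl ketone should be flagged as a Michael acceptor.
print(flag_electrophiles("CC(=O)C=C"))
```

As noted above, a flag like this should prompt a closer look rather than automatic rejection, given that covalent engagement of a target can be a perfectly legitimate design strategy.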
The following advice from BW2014 is certainly sound:
“Check the literature. Search by both chemical similarity and substructure to see if a hit interacts with unrelated proteins or has been implicated in non-drug-like mechanisms.”
This is a good point to mention that singlet oxygen quenchers and scavengers can interfere with the AlphaScreen detection used in the six assays of the original PAINS assay panel. I realize it is somewhat uncouth to say so, but the original PAINS study didn't exactly scour the literature on quenchers and scavengers of singlet oxygen. For example, DABCO is described as a "strong singlet oxygen quencher" without any supporting references.
BW2014 makes this recommendation:
"Assess assays. For each hit, conduct at least one assay that detects activity with a different readout. Be wary of compounds that do not show activity in both assays. If possible, assess binding directly, with a technique such as surface plasmon resonance."
Again this makes a lot of sense and I would add that sometimes pathological behavior of compounds in assays can be discerned by looking at the concentration response of the signal. Direct (i.e. label-free) quantification is particularly valuable, and surface plasmon resonance can also characterize binding stoichiometry, which can be diagnostic of pathological behavior in screens. However, the above advice does raise the question of why a panel of six assays with the same readout was chosen for a study of pan-assay interference.
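To make the concentration-response point concrete: an anomalously steep Hill slope is one widely used warning sign for aggregation and other pathological behavior. Here is a minimal fitting sketch (SciPy, with invented data and an arbitrary steepness threshold, so treat it as illustrative rather than prescriptive):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ic50, slope):
    """Percent inhibition as a function of concentration (bottom fixed at 0)."""
    return top / (1.0 + (ic50 / conc) ** slope)

# Invented % inhibition data at assay concentrations in micromolar.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
inhibition = np.array([2.0, 5.0, 12.0, 55.0, 95.0, 99.0])

(top, ic50, slope), _ = curve_fit(hill, conc, inhibition, p0=[100.0, 3.0, 1.0])
print(f"IC50 = {ic50:.2f} uM, Hill slope = {slope:.2f}")

# A Hill slope well above 1 warrants suspicion; the cutoff is a judgment call.
if slope > 2.0:
    print("steep concentration response: investigate before celebrating the hit")
```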
I'll finish off with some questions that I'd like you to think about. Would you consider a compound hitting all assays in a panel composed of six AlphaScreen assays to constitute evidence for pan-assay interference by that compound? Given the results from 40 HTS campaigns, how would you design a study to characterize pan-assay interference? How would knowing that a catechol was an efficient quencher of singlet oxygen change your perception of that compound as a hit from HTS?
So now that I've distracted you with some questions, I'm going to try to slip away unnoticed. In the next PAINS post, I'll be taking a close look at how PAINS have found their way into the J Med Chem guidelines for authors. Before that, I'll try to entertain you with some lighter fare. Please stay tuned for Confessions of a Units Nazi...