I’ll be discussing promiscuity in this post and, if there’s one thing that religious leaders and drug discovery scientists agree on, it’s that promiscuity is a Bad Thing. In the drug discovery context, compounds that bind to many targets or exhibit ‘activity’ in many assays are described as promiscuous. I first became aware that promiscuity was a practical (as opposed to a moral) problem when we started to use high-throughput screening (HTS) at Zeneca in the mid-1990s, and we soon learned that not all screening output smells of roses (the precursor company ICI had been a manufacturer of dyestuffs, which are selected or designed to be brightly colored and to stick to stuff).
You’ll often encounter assertions in the scientific literature that compounds are promiscuous, and my advice is to check the supporting evidence carefully if you plan to base decisions on the information. In many cases you’ll find that the ‘promiscuity’ is actually predicted, and the problem with many cheminformatic models is that you often (usually?) don’t know how predictive the model is going to be for the compounds that you’re interested in. You have to be careful basing decisions on predictions because it is not unknown for the predictivity of models, and the strengths of trends in data, to be overstated. As detailed in this article, relationships between promiscuity (defined as the number of assays for which ‘activity’ exceeds a specified threshold) and physicochemical descriptors such as lipophilicity or molecular weight are made to appear rather stronger than they actually are. The scope of models may also be overstated: claims that compounds exhibit pan-assay interference have been made on the basis that they share structural features with other compounds (whose structures were not disclosed) that were identified as frequent-hitters in a panel of six assays, all of which use the AlphaScreen read-out.
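To illustrate one way in which a trend can be made to look stronger than it really is (a general statistical point, not a claim about the analysis in any specific study), here’s a minimal Python sketch with simulated data: averaging within bins before calculating a correlation hides the scatter in the raw data and inflates the correlation coefficient. The lipophilicity values, ‘hit counts’ and bin scheme are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
logp = rng.uniform(0, 5, 1000)                   # simulated lipophilicity values
hits = 2 + 0.5 * logp + rng.normal(0, 3, 1000)   # weak trend buried in noise

r_raw = np.corrcoef(logp, hits)[0, 1]

# Average within five logP bins, then correlate the bin means
edges = np.linspace(0, 5, 6)
idx = np.digitize(logp, edges[1:-1])
bin_logp = np.array([logp[idx == i].mean() for i in range(5)])
bin_hits = np.array([hits[idx == i].mean() for i in range(5)])
r_binned = np.corrcoef(bin_logp, bin_hits)[0, 1]

print(f"Pearson r, raw data:  {r_raw:.2f}")     # modest
print(f"Pearson r, bin means: {r_binned:.2f}")  # close to 1
```

Both correlation coefficients describe exactly the same underlying data; only the presentation differs.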
The other reason that you need to be wary of statements that compounds are promiscuous is that the number of assays for which ‘activity’ exceeds a threshold increases as you make the threshold more permissive (I was actually taught about the relationship between permissiveness and promiscuity by the Holy Ghost Fathers at high school in Port of Spain). I’ve ranked some different activity thresholds by permissiveness in Figure 1, which will hopefully give you a clearer idea of what I’m getting at. In general, it is prudent to be skeptical of any claim that promiscuity assessed using a highly permissive activity threshold (e.g., ≥ 50% response at 10 μM) is necessarily relevant in situations where the level of activity against the target of interest is much greater (e.g., IC50 = 20 nM with a well-behaved concentration response, confirmed by affinity measurement in an SPR assay). My own view is that compounds should only be described as promiscuous when concentration responses have been measured for the relevant ‘activities’, and I prefer the term ‘frequent-hitter’ when ‘activity’ is defined in terms of a response in the assay read-out that exceeds a particular cut-off value.
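To put some (invented) numbers on the permissiveness point, here’s a minimal Python sketch for a single compound screened against a hypothetical panel of twenty assays: the apparent hit count falls from 14 to 1 as the activity threshold is tightened from 30% to 90% response.

```python
# Invented % response values at 10 uM for one compound across a
# hypothetical panel of 20 assays
responses = [5, 8, 12, 18, 22, 25, 31, 35, 38, 41,
             44, 48, 52, 55, 61, 63, 70, 74, 82, 95]

# Count 'hits' at progressively less permissive thresholds
for threshold in (30, 50, 70, 90):
    n_hits = sum(r >= threshold for r in responses)
    print(f"'active' defined as >= {threshold}% response: "
          f"hit in {n_hits}/20 assays")
```

The compound itself hasn’t changed; only the definition of ‘active’ has.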
Frequent-hitter behavior is a particular concern in the analysis of HTS output, and an observation that a hit compound in the assay of interest also hits in a number of other assays raises questions about whether further work on the compound is justified. In a comment on the ACS assay interference editorial, I make the point that the observation that a compound is a frequent-hitter may reflect interference with read-out (which I classified as Type 1 behavior) or an undesirable mechanism of action (which I classified as Type 2 behavior). It is important to distinguish between these two types of behavior because they are very different problems that require very different solutions. One criticism that I would make of the original PAINS study, the ‘chemical con artists’ perspective in Nature, and the ACS assay interference editorial is that none of these articles makes a distinction between the two types of nuisance behavior.
I’ll first address interference with assay read-out, where the problem for the drug discovery scientist is that the ‘activity’ is not real. One tactic for dealing with this problem is to test the hit compounds in an assay that uses a different read-out although, as described in this article by some ex-AstraZeneca colleagues, it may be possible to assess, and even correct for, the interference using a single assay read-out. Interference with read-out should generally be expected to increase as the activity threshold is made more permissive (this is why biophysical methods are often preferred for detection and quantitation of fragment binding), and you may find that a compound that interferes with a particular assay read-out at 10 μM does not exhibit significant interference at 100 nM. Interference with read-out should be seen as a problem with the assay rather than a problem with the compound.
An undesirable mechanism of action is a much more serious problem than interference with read-out, and testing hit compounds in an assay that uses a different read-out doesn’t really help because the effects on the target are real. Some undesirable mechanisms of action, such as colloidal aggregate formation, are relatively easy to detect (see the Aggregation Advisor website) but determining the mechanism of action typically requires significant effort and is more challenging when potency is low. An undesirable mechanism of action should be seen as a problem with the compound rather than a problem with the assay, and my view is that this scenario should not be labeled as assay interference.
I’ll wrap up with a personal perspective on frequent-hitters and the analysis of HTS output, although I believe my experiences were similar to those of others working in industry at the time. From the early days of HTS at Zeneca it was clear that many compounds with ‘ugly’ molecular structures were getting picked up as hits, but it was often difficult to demonstrate objectively that ugly hits were genuinely unsuitable for follow-up. We certainly examined frequent-hitter behavior, although some ‘ugly’ hits were not frequent-hitters. We did use SMARTS-based substructural flags (referred to by some colleagues as the ‘de-crapper’) for processing HTS output, and we also looked at structural neighborhoods for hit structures using Flush (the lavatorial name of the software should provide some insight into how we viewed analysis of HTS output). The tactics we used at Zeneca (and later at AstraZeneca) were developed using real HTS data, and I don’t think anybody would have denied that there was a subjective element to the approaches that we used.
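For readers who haven’t encountered this sort of filtering, here’s a minimal RDKit sketch of how SMARTS-based substructural flagging works. The two patterns (a para-quinone and the rhodanine core) are illustrative stand-ins chosen for this post; they are not the SMARTS that we actually used at Zeneca.

```python
from rdkit import Chem

# Illustrative substructural flags (not the actual 'de-crapper' SMARTS)
flags = {
    "para-quinone": Chem.MolFromSmarts("O=C1C=CC(=O)C=C1"),
    "rhodanine":    Chem.MolFromSmarts("S1C(=S)NC(=O)C1"),
}

# p-benzoquinone should raise a flag; phenol should pass clean
for smiles in ["O=C1C=CC(=O)C=C1", "Oc1ccccc1"]:
    mol = Chem.MolFromSmiles(smiles)
    matched = [name for name, patt in flags.items()
               if mol.HasSubstructMatch(patt)]
    print(smiles, "->", matched if matched else "no flags raised")
```

In practice such flags only annotate structures; as noted above, deciding what to do with a flagged hit was always partly a matter of judgment.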
7 comments:
To be fair, figuring out the MOA of an extremely potent compound is not an easy task either.
Hi Pete,
I appreciate the distinction between undesirable mechanisms of action and assay interference. That said, given that teams often have to sort through too many hits, the specific type of pathology may not matter as long as they find advanceable chemical matter. In cases where hits are few and far between, teams are likely to be more forgiving anyway and dive into the bottom of the proverbial barrel to avoid throwing out any babies, to mix metaphors.
Put another way, I can think of plenty of examples where teams have followed up on false positives or bad actors, wasting tremendous amounts of time and resources, but I can't think of any where teams failed to follow up on molecules they "should" have.
Hi Dan,
The main point of the blog post was to address the problem of over-interpretation of frequent-hitter behavior, and my experience with analysis of HTS output was that, as you point out, the primary objective was to identify advanceable chemical matter. While I was in Wilmington (1997-1999) we did try to see whether frequent-hitter behavior was ‘concentrated’ in particular assay read-outs, although the results of the analyses were unconvincing. During that time, we ran HTS campaigns against a cysteine protease and a protein tyrosine phosphatase and noticed that some nasty-looking (potentially accessible redox chemistry) compounds hit in both assays. Later on, the trend was to run a high-throughput concentration response, and this would allow some dubious-looking hit compounds to be put out of their (and our) misery with a clear conscience.
I’d have no issue with PAINS filters if they’d just been presented as structural alerts for frequent-hitter behavior, and some of the compounds claimed to be PAINS do indeed look pretty nasty. However, I do have a very big issue with using PAINS filters to evaluate assay results reported in manuscripts, and I made this quite clear in this comment on the ACS assay interference editorial. I would also question the use of a predictor of frequent-hitter behavior in a small, non-diverse assay panel for the assessment of a chemical probe that has 20 nM affinity with a well-behaved concentration response.
Hi Pete,
Can you provide any examples where people have attacked a "chemical probe that has 20 nM affinity with a well-behaved concentration response" solely because it fails a PAINS filter?
Hi Dan, I haven’t got a specific example of a match with a PAINS substructure being invoked in criticism of a probe with 20 nM affinity. Nevertheless, “PAINS” features as the problem for two of six entries in “Table 2 Examples of widely used low-quality probes” of this article. If you had a 1 μM IC50 with well-behaved concentration response and were using a read-out other than AlphaScreen, how much would you worry about a measured ‘PAINS response’ that was maximal (i.e., compound hits in each of the six AlphaScreen assays of the PAINS assay panel)?
Hi Pete,
Are you changing your criterion from IC50 = 20 nM to IC50 = 1000 nM? Lots of bad things can happen at that concentration, such as aggregation - here's an example at 200 nM:
https://practicalfragments.blogspot.com/2009/08/avoiding-will-o-wisps-aggregation.html
So yes, I think it's prudent to worry about "a 1 μM IC50 with well-behaved concentration response... using a read-out other than AlphaScreen" whether or not the compound passes PAINS filters.
Regarding the NCB article, are you suggesting that flavones and epigallocatechin-3-gallate make good chemical probes? Note that PAINS are listed not as "the problem" but "a problem" for these examples.
I honestly can't think of a single example where a well-behaved chemical probe has been pilloried for failing PAINS filters, and I'm genuinely curious to know whether or how often it actually happens.
Hi Dan, the question that I actually posed was: “If you had a 1 μM IC50 with well-behaved concentration response and were using a read-out other than AlphaScreen, how much would you worry about a measured ‘PAINS response’ that was maximal (i.e., compound hits in each of the six AlphaScreen assays of the PAINS assay panel)?” and I’m definitely not asserting that the (unspecified) flavones or epigallocatechin-3-gallate “make good chemical probes”.
On a cheminformatic note, predictive models might (or might not) help you get to an end point but you’ll still need measured data to establish that you’ve actually got to the end point. If the data indicated that the (unspecified) flavones or epigallocatechin-3-gallate were unsuitable as chemical probes (or had not been measured) then this should have been stated in the problems column of Table 2.