Wednesday 22 February 2023

Structural alerts and assessment of chemical probes


I’ll wrap up (at least for now) the series of posts on chemical probes by returning to the use of cheminformatic models for assessment of the suitability of compounds for use as chemical probes. My view is that there is currently no cheminformatic model, at least in the public domain, that is usefully predictive of the suitability (or unsuitability) of compounds for use as chemical probes, and that assessments should therefore be based exclusively on experimental measurements of affinity, selectivity and so on. Put another way, acceptable chemical probes will need to satisfy the same criteria regardless of the extent to which they offend the tastes of PAINS filter evangelists (and if PAINS really are as bad as the evangelists would have us believe then they’re hardly going to satisfy these acceptability criteria anyway). My main criticism of PAINS filters (summarized in this comment on the ACS assay interference editorial) is that there is a significant disconnect between dogma and data.

I’ll start by saying something about cheminformatics since, taken together, the PAINS substructures can be considered as a cheminformatic predictive model. If you’re using a cheminformatic predictive model then you also need to be aware that it will have an applicability domain, which is limited by the data used to train and validate the model. Consider, for example, that you have access to a QSAR model for hERG blockade that has been trained and validated using only data for compounds that are protonated at the assay pH. If you base decisions on predictions for compounds that are neutral under assay conditions then you’d be using the model outside its applicability domain (and therefore in a very weak position to blame the modelers if the shit hits the fan). While cheminformatic predictive models might (or might not) help you get to a desired endpoint more quickly, you’ll still need experimental measurements in order to know that you have indeed reached that endpoint.
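
To make the applicability-domain point a bit more concrete, here is a minimal sketch of what such a gate might look like in practice. Everything in it is my own assumption for illustration: the hERG model is hypothetical, and the SMARTS pattern is only a crude proxy for “protonated at assay pH” (a serious implementation would use a pKa predictor and check coverage in the descriptor space of the training set).

```python
from rdkit import Chem

# Crude, illustrative proxy for "protonated at assay pH": the molecule
# contains at least one aliphatic amine nitrogen that is not an amide
# or aniline nitrogen. A serious applicability-domain check would use
# a pKa predictor and coverage of the training set in descriptor space.
BASIC_AMINE = Chem.MolFromSmarts("[NX3;!$(NC=O);!$(Nc)]")

def in_herg_model_domain(smiles: str) -> bool:
    """True if a compound plausibly lies inside the domain of a
    (hypothetical) hERG model trained only on protonated compounds."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"could not parse SMILES: {smiles}")
    return mol.HasSubstructMatch(BASIC_AMINE)

# triethylamine (basic), benzoic acid (neutral), acetanilide (neutral)
for smi in ("CCN(CC)CC", "OC(=O)c1ccccc1", "CC(=O)Nc1ccccc1"):
    print(smi, in_herg_model_domain(smi))
```

The point of the gate is simply that predictions for compounds failing the check should be treated as unreliable rather than acted on.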

But let’s get back to PAINS filters, which were introduced in this 2010 study. PAINS is an acronym for pan-assay interference compounds and you could be forgiven for thinking that the PAINS filters were derived by examining chemical structures of compounds that had been shown to exhibit pan-assay interference. However, the original PAINS study doesn’t appear to present even a single example of a compound that is shown experimentally to exhibit pan-assay interference, and the medicinal chemistry literature isn’t exactly bursting at the seams with examples of such compounds.

The data set on which the PAINS filters were trained consisted of the hits (assay results in which the response exceeded a threshold when the compound was tested at a single concentration) from six high-throughput screens, each of which used AlphaScreen read-out. Although PAINS filters are touted as predictors of pan-assay interference, it would be more accurate to describe them as predictors of frequent-hitter behavior in this particular assay panel (as noted in a previous post, promiscuity generally increases as the activity threshold is made more permissive). From a cheminformatic perspective, the choice of this assay panel appears to represent a suboptimal design for an experiment to detect and characterize pan-assay interference (especially given that data from “more than 40 primary screening campaigns against enzymes, ion channels, protein-protein interactions, and whole cells” were available for analysis). Those who advocate the use of PAINS filters for the assessment of the suitability of compounds for use as chemical probes (and the Editors-in-Chief of more than one ACS journal) may wish to think carefully about why they are ignoring a similar study based on a larger, more diverse (in terms of targets and read-outs) data set that had been published four years before the PAINS study.
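
The threshold-dependence of “frequent hitter” labels is easy to demonstrate. The sketch below uses entirely synthetic percent-response data (it bears no relation to the actual PAINS screening data, which are proprietary) to show how the number of compounds flagged as frequent hitters grows as the hit threshold is made more permissive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Entirely synthetic percent-response data: 1000 compounds x 6 assays.
# (Illustration only -- no relation to the actual PAINS screening data.)
response = rng.normal(loc=20.0, scale=15.0, size=(1000, 6))

for threshold in (50.0, 25.0):
    hits_per_compound = (response > threshold).sum(axis=1)
    n_frequent = int((hits_per_compound >= 4).sum())  # 'hit' in 4+ of 6 assays
    print(f"hit threshold {threshold:.0f}% response -> "
          f"{n_frequent} compounds look like frequent hitters")
```

With these (arbitrary) settings, dropping the threshold from 50% to 25% response takes the frequent-hitter count from essentially zero to well over a hundred, which is why any claim of promiscuity needs to be accompanied by the threshold used to define a hit.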

Although a number of ways in which potential nuisance compounds can reveal their dark sides are discussed in the original PAINS study, the nuisance behavior is not actually linked to the frequent-hitter behavior reported for compounds in the assay panel. Also, it can safely be assumed that none of the six protein-protein interaction targets of the PAINS assay panel features a catalytic cysteine, and my view is that any frequent-hitter behavior observed in the assay panel for ‘cysteine killers’ is more likely to be due to reaction with (or quenching of) singlet oxygen. It’s also worth pointing out that when compounds are described as exhibiting pan-assay interference (or as frequent hitters), the relevant nuisance behavior has often been predicted (or assumed) rather than demonstrated with measured data. I would argue that even a ‘maximal PAINS response’ (the compound is actually observed as a hit in each of the six assays of the PAINS assay panel) would not rule out the use of a compound as a chemical probe.

I have argued on cheminformatic grounds that it’s not appropriate to use PAINS filters for assessment of potential probes, but there’s another reason that those seeking to set standards for chemical probes shouldn’t really be endorsing the use of PAINS filters for this purpose. “A conversation on using chemical probes to study protein function in cells and organisms”, recently published in Nature Communications, stresses the importance of Open Science. However, the PAINS structural alerts were trained on proprietary data, and using PAINS filters to assess potential chemical probes will ultimately raise questions about the level of commitment to Open Science. I made a very similar point in my comment on the ACS assay interference editorial (Journal of Medicinal Chemistry considers the publication of analyses of proprietary data to be generally unacceptable).

Let’s take a look at “The promise and peril of chemical probes”, which was published in Nature Chemical Biology in 2015. The authors state:

“We learned that many of the chemical probes in use today had initially been characterized inadequately and have since been proven to be nonselective or associated with poor characteristics such as the presence of reactive functionality that can interfere with common assay features [3] (Table 2). The continued use of these probes poses a major problem: tens of thousands of publications each year use them to generate research of suspect conclusions, at great cost to the taxpayer and other funders, to scientific careers and to the reliability of the scientific literature.”

Let’s take a look at Table 2 (Examples of widely used low-quality probes) from “The promise and peril of chemical probes”. You’ll see “PAINS” in the problems column of Table 2 for two of the six low-quality probes, and this rings a number of alarm bells for me. Specifically, it is asserted that flavones are “often promiscuous and can be pan-assay interfering (PAINS) compounds” and that epigallocatechin-3-gallate is a “promiscuous PAINS compound”, which raises a number of questions. Were the (unspecified) flavones and epigallocatechin-3-gallate actually observed to be promiscuous and, if so, what activity threshold was used for quantifying promiscuity? Were any of the (unspecified) flavones or epigallocatechin-3-gallate actually observed to exhibit pan-assay interference? Were affinity and selectivity measurements actually available for the (unspecified) flavones or epigallocatechin-3-gallate?
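
For what it’s worth, the PAINS substructures ship with open-source toolkits such as RDKit, so anybody can check which alerts (if any) a given flavone or epigallocatechin-3-gallate actually fires. A minimal sketch is shown below; the SMILES are my own transcriptions of the published structures (stereocentres omitted, since substructure matching doesn’t depend on them here), and it bears repeating that matching a SMARTS pattern is not a measurement of promiscuity or of assay interference.

```python
from rdkit import Chem
from rdkit.Chem import FilterCatalog

# Build the RDKit PAINS catalog (the A, B and C families combined).
params = FilterCatalog.FilterCatalogParams()
params.AddCatalog(FilterCatalog.FilterCatalogParams.FilterCatalogs.PAINS)
catalog = FilterCatalog.FilterCatalog(params)

# SMILES transcribed by me from the published structures; stereocentres
# omitted because substructure matching does not depend on them here.
compounds = {
    "quercetin (a flavone)": "O=C1C(O)=C(c2ccc(O)c(O)c2)Oc2cc(O)cc(O)c21",
    "epigallocatechin-3-gallate":
        "Oc1cc(O)c2c(c1)OC(c1cc(O)c(O)c(O)c1)C(OC(=O)c1cc(O)c(O)c(O)c1)C2",
}

for name, smiles in compounds.items():
    mol = Chem.MolFromSmiles(smiles)
    alerts = [entry.GetDescription() for entry in catalog.GetMatches(mol)]
    print(name, "->", alerts if alerts else "no PAINS alerts")
```

Compounds carrying catechol or pyrogallol motifs, as both of these do, are typically caught by the catechol-type alerts, but a fired alert tells you nothing about how either compound actually behaves in any particular assay.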

I’ll conclude the post by saying something about cheminformatic predictive models. First, to use a cheminformatic predictive model outside its applicability domain is a serious error (and will cast doubt on the expertise of anybody doing so). Second, predictions might (or might not) help you get to a desired endpoint, but you’ll still need measured data to establish that you’ve reached that endpoint or that a compound is unfit for a particular purpose.
