Monday, 2 January 2023

Assessment of chemical probes


I’ll be taking a look at some of the criteria by which chemical probes are assessed, specifically structural alerts, and here’s the link to the Chemical Probes Portal. Before getting into the post there are a couple of points that I need to stress. First, structural alerts derived from analysis of screening hits (defined as responses that exceed a threshold when compounds are assayed at a single concentration) are not necessarily useful for assessing higher-affinity compounds for which concentration responses have been determined. Second, chemical probes have to satisfy the same set of acceptability criteria whether or not they trigger structural alerts.

I’ll start by commenting on “A conversation on using chemical probes to study protein function in cells and organisms”, which was recently published in Nature Communications, since it was this article that triggered the blog post. I consider most of the views expressed in the article to be sound, although I disagree with much of what is stated in the following paragraph:

“The first essential thing that needs to be done is to eliminate the really bad nuisance compounds, which can have problematic behavior—like being non-specifically very reactive with proteins; forming colloidal aggregates that non-specifically adsorb and inactivate proteins; exerting toxicity toward cells, for example through a membrane damaging effect called phospholipidosis; or exhibiting spectral or fluorescence properties that interfere with the biological assay read-out. These undesirable compounds are often referred to as Pan Assay Interference or PAINS compounds, as highlighted by Jonathan Baell [4]. There are software filters or algorithms available that should be used routinely to identify any risk of such chemical promiscuity and simple lab assays should be run to check for the various problematic properties we mentioned. Such compounds should never be considered further or used as chemical probes. They should be excluded from compound libraries. Yet many are sold by commercial vendors as chemical probes and widely used.”

In 2017 a number of ACS journals simultaneously published the editorial “The Ecstasy and Agony of Assay Interference Compounds” and I believe that a number of points raised in a comment on this editorial are still relevant to dealing with nuisance compounds. In the comment, I classified bad behavior of screening ‘actives’ as Type 1 (the compound hits in the assay but does not affect target function) and Type 2 (the compound affects target function through an undesirable mechanism of action). These are two very different problems and each requires its own solution. Type 1 behavior, which can also be described as interference with read-out, is primarily a problem from the perspective of analysis of high-throughput screening (HTS) output because you don’t know whether observed ‘activity’ is real. From the perspective of probe promiscuity, Type 1 behavior is much less of a problem than Type 2 behavior because the ‘activity’ is not real. If you’re trying to decide whether a potential chemical probe is acceptable, then genuine activity at 50 nM against another protein is going to hurt a whole lot more than responses of >50% in several assays at a test concentration of 10 μM.
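To put numbers on that comparison, here’s a minimal Python sketch assuming a simple one-site binding model with a Hill slope of 1 (the function and the 9 μM figure are mine, purely for illustration):

def fractional_response(conc_nM: float, ic50_nM: float) -> float:
    """One-site model, Hill slope = 1: response = C / (C + IC50)."""
    return conc_nM / (conc_nM + ic50_nM)

# Genuine 50 nM activity gives a near-saturating response when
# tested at 10 μM (10,000 nM)
print(f"{fractional_response(10_000, 50):.1%}")     # 99.5%

# A >50% response at 10 μM only bounds the potency (IC50 < 10 μM),
# which may be 200-fold (or more) weaker than 50 nM
print(f"{fractional_response(10_000, 9_000):.1%}")  # 52.6%

The point of the sketch is that a single-concentration response only places a weak bound on potency, while a measured 50 nM affinity actually pins it down.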

It is asserted in the conversation that there are “software filters or algorithms available that should be used routinely to identify any risk of such chemical promiscuity”. When recommending the use of predictive models for assessment of potential probes, it’s important to be aware of their inherent limitations. Specifically, models derived from analysis of data have applicability domains that are imposed by the data used to build the models. For example, the PAINS filters were derived from analysis of the output of six screens that all use the same read-out (AlphaScreen) and this limits the applicability domain of the PAINS filter model to prediction of frequent-hitter behavior in AlphaScreen assays. It is also asserted in the conversation that commercial vendors are selling compounds as chemical probes that are unfit for purpose, and I strongly recommend that anybody making such assertions carefully examine the supporting evidence. I would argue that sharing structural features with compounds (for which structures have not been disclosed) that have been observed to exhibit frequent-hitter behavior when screened at a single concentration (e.g., 10 μM) would not credibly support an assertion that a compound is unsuitable for use as a chemical probe. A specific criticism I would make of the way that structural alerts (especially those derived using proprietary data) are used is that it is sometimes suggested, for example in the ACS assay interference editorial, that HTS hits that don’t trigger structural alerts can be checked less thoroughly than hits that do.
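As an aside for anybody who wants to see what such a filter actually does, the open-source RDKit toolkit ships the PAINS substructure definitions as a filter catalog. Here’s a minimal sketch (the ene-rhodanine SMILES is simply an illustrative input):

from rdkit import Chem
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

# Build a catalog containing the PAINS substructure filters
params = FilterCatalogParams()
params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)
catalog = FilterCatalog(params)

# 5-benzylidene rhodanine, a classic ene-rhodanine chemotype
mol = Chem.MolFromSmiles("O=C1NC(=S)SC1=Cc1ccccc1")
entry = catalog.GetFirstMatch(mol)
print(entry.GetDescription() if entry else "no PAINS alert")

Note that a match tells you only that the compound shares a substructure with frequent hitters in those six AlphaScreen campaigns, which is exactly the applicability domain issue discussed above.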

The Information Centre of the Chemical Probes Portal includes a “Toxicophores and PAINS Alerts” section in which it is correctly stated that “the presence of the toxicophore or PAINS substructures within the chemical structure of a compound does not necessarily mean that it will be non-specifically active or toxic, or give rise to assay interference”. The “Toxicophores and PAINS Alerts” section might work better as a “Structural Alerts” section, and the toxicophores citation appears to be incorrect (reference 10 actually cites an article on toxicity risks associated with excessive lipophilicity). If making this change, I would recommend saying something about the applicability domains of any structural alerts that are highlighted and considering the inclusion of Aggregator Advisor (link to article) and BadApple (link to article).

Alternatively, it might be an idea to create separate “Nuisance Compounds” and “Toxicophores” sections because these are very different problems. I would generally recommend the use of the term “nuisance compounds” since PAINS and colloidal aggregators are sometimes treated as separate categories of bad actor, as is the case in the ACS assay interference editorial, and the criteria for labelling compounds as PAINS are ambiguous. It would certainly be useful to include some reviews on assay interference, such as this one, in a “Nuisance Compounds” section. I quite like this article by former colleagues, which shows how interference with read-out can be assessed and even corrected for. As for a “Structural Alerts” section, the applicability domains of any predictive models should be indicated so that people don’t end up using models that have been trained on hits from screening at 10 μM to assess probes with 20 nM affinity.
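To be concrete about what indicating an applicability domain might look like, here’s a hypothetical sketch (the function name and thresholds are mine and purely illustrative, not taken from any published model):

# Hypothetical guard: refuse to apply a frequent-hitter alert derived
# from single-concentration screening data to a compound whose potency
# lies far outside that concentration regime.
def in_applicability_domain(probe_potency_nM: float,
                            screen_conc_nM: float = 10_000,
                            max_fold_gap: float = 100.0) -> bool:
    """Alerts trained on hits at screen_conc_nM say little about a
    probe that is orders of magnitude more potent."""
    return screen_conc_nM / probe_potency_nM <= max_fold_gap

print(in_applicability_domain(20))     # False: 500-fold gap, don't extrapolate
print(in_applicability_domain(1_000))  # True: potency comparable to screen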

This is a good point at which to wrap up and it’s worth stressing that the essence of the criticism of PAINS filters is simply that the rhetoric is not supported by the data. Those like me who are critical of the way that PAINS filters are used are certainly not suggesting that screening hits all smell of roses (back in 1995 I used the Daylight toolkits to build the SMARTS-matching software that was used in the Zeneca ‘de-crapper’, and colleagues also created the Flush software), nor are we denying that assay interference is a serious problem. Although I believe that it is certainly helpful to have scientists who have worked with HTS data share their experiences and opinions with respect to hit quality, I would argue that there are dangers in giving such opinions too much weight (this article may be of interest), especially when data that might be used to justify the opinions are proprietary. Specifically, I would strongly advise against making statements that a compound is unfit for use as a chemical probe unless the assertion is supported by measured data in the public domain for the compound in question.
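For readers unfamiliar with SMARTS, this is the kind of substructure matching that software like the ‘de-crapper’ performs. The sketch below uses RDKit with an illustrative alert for α-halo ketones (a reactive electrophile) rather than any of the actual Zeneca alerts:

from rdkit import Chem

# Structural alerts are typically encoded as SMARTS patterns
alert = Chem.MolFromSmarts("[CX3](=O)[CH2][Cl,Br,I]")

mol = Chem.MolFromSmiles("O=C(CCl)c1ccccc1")  # phenacyl chloride
print(mol.HasSubstructMatch(alert))  # True: the alert fires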

I’ll leave it there for now. In the next post on chemical probes, I’ll be taking a look at permeability.

2 comments:

willem said...

Think we can agree that rules imply exceptions Pete, and the domain of application needs to be heeded for any model. Many users are aware of neither, as you highlight. If you look at the 7-year-itch paper you'll note that the observations in a corporate collection do not always support the rules (in this case PAINS), and such analyses are fairly easily done for every rule. Still, teasing apart the data from highly biased corporate data sets is a problem in itself.

https://pubs.acs.org/doi/10.1021/acschembio.7b00903

Peter Kenny said...

Good points, Willem, and there's nothing I disagree with. I have no problem with people publishing models for nuisance behavior in assays (even when these models have been derived from proprietary data). The trouble starts when the models are given what I'll call a 'legal' basis and predictions from the model become 'evidence', and my view is that the JMC editors showed poor judgement by incorporating PAINS filters into the author guidelines. One point that I would make about the 7-year itch paper is that there appeared to be a reluctance to respond to (or even acknowledge) criticisms of the original study (I was reminded of 'can do better, must do better' on my high school report cards in the days when I was taught by the Holy Ghost Fathers in Port of Spain).