Wednesday, 22 February 2023

Structural alerts and assessment of chemical probes


I’ll wrap up (at least for now) the series of posts on chemical probes by returning to the use of cheminformatic models for assessment of the suitability of compounds for use as chemical probes. My view is that there is currently no cheminformatic model, at least in the public domain, that is usefully predictive of the suitability (or unsuitability) of compounds for use as chemical probes, and that assessments should therefore be based exclusively on experimental measurements of affinity, selectivity, etc. Put another way, acceptable chemical probes will need to satisfy the same criteria regardless of the extent to which they offend the tastes of PAINS filter evangelists (and if PAINS really are as bad as the evangelists would have us believe then they’re hardly going to satisfy these acceptability criteria anyway). My main criticism of PAINS filters (summarized in this comment on the ACS assay interference editorial) is that there is a significant disconnect between dogma and data.

I’ll start by saying something about cheminformatics since, taken together, the PAINS substructures can be considered as a cheminformatic predictive model. If you’re using a cheminformatic predictive model then you also need to be aware that it has an applicability domain that is limited by the data used to train and validate the model. Consider, for example, that you have access to a QSAR model for hERG blockade that has been trained and validated using only data for compounds that are protonated at the assay pH. If you base decisions on predictions for compounds that are neutral under assay conditions then you’d be using the model outside its applicability domain (and therefore be in a very weak position to blame the modelers if the shit hits the fan). While cheminformatic predictive models might (or might not) help you get to a desired end point more quickly, you’ll still need experimental measurements in order to know that you have indeed reached that end point.
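To make the applicability domain idea a little more concrete, here’s a minimal sketch (using RDKit) of the sort of nearest-neighbor check that modelers sometimes use to flag queries that lie outside the training data. The 0.35 Tanimoto cut-off is purely illustrative and not a validated value:

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def in_applicability_domain(query_smiles, training_smiles, cutoff=0.35):
    """Crude nearest-neighbor applicability-domain check: treat the query
    as in-domain if its Tanimoto similarity to at least one training-set
    compound reaches the cut-off (0.35 is illustrative, not validated)."""
    def fp(smiles):
        return AllChem.GetMorganFingerprintAsBitVect(
            Chem.MolFromSmiles(smiles), 2, nBits=2048)
    query_fp = fp(query_smiles)
    return any(DataStructs.TanimotoSimilarity(query_fp, fp(s)) >= cutoff
               for s in training_smiles)
```

A compound that fails a check like this hasn’t necessarily been predicted badly, but you’d be unwise to bet the project on the prediction.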

But let’s get back to PAINS filters which were introduced in this 2010 study. PAINS is an acronym for pan-assay interference compounds and you could be forgiven for thinking that PAINS filters were derived by examining chemical structures of compounds that had been shown to exhibit pan-assay interference. However, the original PAINS study doesn’t appear to present even a single example of a compound that is shown experimentally to exhibit pan-assay interference and the medicinal chemistry literature isn’t exactly bursting at the seams with examples of such compounds.
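For readers who are curious about what applying PAINS filters actually involves in practice, the RDKit FilterCatalog module ships an implementation of the published PAINS substructures. This is a minimal sketch; quercetin (a flavonol) is simply an illustrative query and the SMILES is my own rendering of the structure:

```python
from rdkit import Chem
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

# Build a catalog from the PAINS substructures distributed with RDKit
params = FilterCatalogParams()
params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)
catalog = FilterCatalog(params)

# Quercetin (a flavonol) as an illustrative query
mol = Chem.MolFromSmiles("O=C1C(O)=C(c2ccc(O)c(O)c2)Oc2cc(O)cc(O)c12")
if catalog.HasMatch(mol):
    print("PAINS alert:", catalog.GetFirstMatch(mol).GetDescription())
```

Note that a match tells you only that the compound contains one of the flagged substructures; it is not, in itself, evidence of interference in any assay.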

The data set on which the PAINS filters were trained consisted of the hits (assay results in which the response exceeded a threshold when the compound was tested at a single concentration) from six high-throughput screens, each of which used an AlphaScreen read-out. Although PAINS filters are touted as predictors of pan-assay interference, it would be more accurate to describe them as predictors of frequent-hitter behavior in this particular assay panel (as noted in a previous post, promiscuity generally increases as the activity threshold is made more permissive). From a cheminformatic perspective, the choice of this assay panel appears to represent a suboptimal design for an experiment to detect and characterize pan-assay interference (especially given that data from “more than 40 primary screening campaigns against enzymes, ion channels, protein-protein interactions, and whole cells” were available for analysis). Those who advocate the use of PAINS filters for the assessment of the suitability of compounds for use as chemical probes (and the Editors-in-Chief of more than one ACS journal) may wish to think carefully about why they are ignoring a similar study, based on a larger and more diverse (in terms of targets and read-outs) data set, that had been published four years before the PAINS study.

Although a number of ways in which potential nuisance compounds can reveal their dark sides are discussed in the original PAINS study, the nuisance behavior is not actually linked to the frequent-hitter behavior reported for compounds in the assay panel. Also, it can be safely assumed that none of the six protein-protein interaction targets of the PAINS assay panel features a catalytic cysteine, and my view is that any frequent-hitter behavior observed in the assay panel for ‘cysteine killers’ is more likely to be due to reaction with (or quenching of) singlet oxygen. It’s also worth pointing out that when compounds are described as exhibiting pan-assay interference (or as frequent hitters), the relevant nuisance behavior has often been predicted (or assumed) rather than demonstrated with measured data. I would argue that even a ‘maximal PAINS response’ (the compound is actually observed as a hit in each of the six assays of the PAINS assay panel) would not rule out the use of a compound as a chemical probe.

I have argued on cheminformatic grounds that it’s not appropriate to use PAINS filters for assessment of potential probes but there’s another reason that those seeking to set standards for chemical probes shouldn’t really be endorsing the use of PAINS filters for this purpose. “A conversation on using chemical probes to study protein function in cells and organisms” that was recently published in Nature Communications stresses the importance of Open Science. However, the PAINS structural alerts were trained on proprietary data and using PAINS filters to assess potential chemical probes will ultimately raise questions about the level of commitment to Open Science. I made a very similar point in my comment on the ACS assay interference editorial (Journal of Medicinal Chemistry considers the publication of analyses of proprietary data to be generally unacceptable).

Let’s take a look at “The promise and peril of chemical probes” that was published in Nature Chemical Biology in 2015. The authors state:

“We learned that many of the chemical probes in use today had initially been characterized inadequately and have since been proven to be nonselective or associated with poor characteristics such as the presence of reactive functionality that can interfere with common assay features [3] (Table 2). The continued use of these probes poses a major problem: tens of thousands of publications each year use them to generate research of suspect conclusions, at great cost to the taxpayer and other funders, to scientific careers and to the reliability of the scientific literature.”

Now let’s look at Table 2 (Examples of widely used low-quality probes) from “The promise and peril of chemical probes”. You’ll see “PAINS” in the problems column for two of the six low-quality probes, and this rings a number of alarm bells for me. Specifically, it is asserted that flavones are “often promiscuous and can be pan-assay interfering (PAINS) compounds” and that epigallocatechin-3-gallate is a “promiscuous PAINS compound”, which raises several questions. Were the (unspecified) flavones and epigallocatechin-3-gallate actually observed to be promiscuous and, if so, what activity threshold was used for quantifying promiscuity? Were any of the (unspecified) flavones, or epigallocatechin-3-gallate, actually observed to exhibit pan-assay interference? Were affinity and selectivity measurements actually available for the (unspecified) flavones or epigallocatechin-3-gallate?

I’ll conclude the post by saying something about cheminformatic predictive models. First, to use a cheminformatic predictive model outside its applicability domain is a serious error (and will cast doubt on the expertise of anybody doing so). Second, predictions might (or might not) help you get to a desired end point but you’ll still need measured data to establish that you’ve reached that end point or that a compound is unfit for a particular purpose.

Wednesday, 15 February 2023

Frequent-hitter behavior and promiscuity

I’ll be discussing promiscuity in this post and, if there’s one thing that religious leaders and drug discovery scientists agree on, it’s that promiscuity is a Bad Thing. In the drug discovery context, compounds that bind to many targets or exhibit ‘activity’ in many assays are described as promiscuous. I first became aware that promiscuity was a practical (as opposed to a moral) problem when we started to use high-throughput screening (HTS) at Zeneca in the mid-1990s, and we soon learned that not all screening output smells of roses (the precursor company ICI had been a manufacturer of dyestuffs, which are selected or designed to be brightly colored and to stick to stuff).

You’ll often encounter assertions in the scientific literature that compounds are promiscuous and my advice is to check the supporting evidence carefully if you plan to base decisions on the information. In many cases, you’ll find that the ‘promiscuity’ is actually predicted, and the problem with many cheminformatic models is that you often (usually?) don’t know how predictive the model is going to be for the compounds that you’re interested in. You have to be careful when basing decisions on predictions because it is not unknown for the predictivity of models and the strengths of trends in data to be overstated. As detailed in this article, relationships between promiscuity (defined as the number of assays for which ‘activity’ exceeds a specified threshold) and physicochemical descriptors such as lipophilicity or molecular weight are made to appear rather stronger than they actually are. The scope of models may also be overstated: claims that compounds exhibit pan-assay interference have been made on the basis that the compounds share structural features with other compounds (the structures were not disclosed) that were identified as frequent hitters in a panel of six assays that all use the AlphaScreen read-out.

The other reason that you need to be wary of statements that compounds are promiscuous is that the number of assays for which ‘activity’ exceeds a threshold increases as you make the threshold more permissive (I was actually taught about the relationship between permissiveness and promiscuity by the Holy Ghost Fathers at high school in Port of Spain). I’ve ranked some different activity thresholds by permissiveness in Figure 1, which will hopefully give you a clearer idea of what I’m getting at. In general, it is prudent to be skeptical of any claim that promiscuity quantified using a highly permissive activity threshold (e.g., ≥ 50% response at 10 μM) is relevant in situations where the level of activity against the target of interest is much greater (e.g., IC50 = 20 nM with a well-behaved concentration response, confirmed by affinity measurement in an SPR assay). My own view is that compounds should only be described as promiscuous when concentration responses have been measured for the relevant ‘activities’, and I prefer to use the term ‘frequent-hitter’ when ‘activity’ is defined in terms of a response in the assay read-out that exceeds a particular cut-off value.
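The effect of permissiveness on apparent promiscuity is easy to demonstrate with simulated data. Here’s a minimal sketch; the response matrix is randomly generated, so the numbers mean nothing beyond illustrating that the count of ‘frequent hitters’ grows as the threshold is loosened:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
# Hypothetical % response matrix: 1000 compounds screened in 40 assays
responses = rng.normal(loc=10.0, scale=15.0, size=(1000, 40))

for threshold in (50, 30, 20):  # % response cut-offs, stringent to permissive
    hits_per_compound = (responses >= threshold).sum(axis=1)
    n_frequent = int((hits_per_compound >= 5).sum())
    print(f"cut-off {threshold}%: {n_frequent} compounds hit in >= 5 assays")
```

The practical consequence is that a promiscuity (or frequent-hitter) statistic is uninterpretable unless the threshold used to define a hit is reported alongside it.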

Frequent-hitter behavior is a particular concern in analysis of HTS output and an observation that a hit compound in the assay of interest also hits in a number of other assays raises questions about whether further work on the compound is justified. In a comment on the ACS assay interference editorial, I make the point that the observation that a compound is a frequent hitter may reflect interference with read-out (which I classified as Type 1 behavior) or an undesirable mechanism of action (which I classified as Type 2 behavior). It is important to distinguish between these two types of behavior because they are very different problems that require very different solutions. One criticism that I would make of the original PAINS study, the chemical con artists perspective in Nature and the ACS assay interference editorial is that none of these articles makes that distinction.

I’ll first address interference with assay read-out, where the problem for the drug discovery scientist is that the ‘activity’ is not real. One tactic for dealing with this problem is to test the hit compounds in an assay that uses a different read-out although, as described in this article by some ex-AstraZeneca colleagues, it may be possible to assess, and even correct for, the interference using a single assay read-out. Interference with read-out should generally be expected to increase as the activity threshold is made more permissive (this is why biophysical methods are often preferred for detection and quantitation of fragment binding) and you may find that a compound that interferes with a particular assay read-out at 10 μM does not exhibit significant interference at 100 nM. Interference with read-out should be seen as a problem with the assay rather than a problem with the compound.

An undesirable mechanism of action is a much more serious problem than interference with read-out, and testing hit compounds in an assay that uses a different read-out doesn’t really help because the effects on the target are real. Some undesirable mechanisms of action, such as colloidal aggregate formation, are relatively easy to detect (see the Aggregation Advisor website) but determining the mechanism of action typically requires significant effort and is more challenging when potency is low. An undesirable mechanism of action should be seen as a problem with the compound rather than a problem with the assay, and my view is that this scenario should not be labelled as assay interference.

I’ll wrap up with a personal perspective on frequent hitters and analysis of HTS output, although I believe my experiences were similar to those of others working in industry at the time. From the early days of HTS at Zeneca, where I worked, it was clear that many compounds with ‘ugly’ molecular structures were getting picked up as hits but it was often difficult to demonstrate objectively that ugly hits were genuinely unsuitable for follow-up. We certainly examined frequent-hitter behavior, although some ‘ugly’ hits were not frequent hitters. We did use SMARTS-based substructural flags (referred to by some colleagues as the ‘de-crapper’) for processing HTS output and we also looked at structural neighborhoods for hit structures using Flush (the lavatorial name of the software should provide some insight into how we viewed analysis of HTS output). The tactics we used at Zeneca (and later at AstraZeneca) were developed using real HTS data and I don’t think anybody would have denied that there was a subjective element to the approaches that we used.
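The de-crapper itself was never published, but the general approach is easy to illustrate. Here’s a minimal sketch of SMARTS-based flagging with RDKit in which the two patterns are generic stand-ins of my own choosing rather than anything that was actually used at Zeneca:

```python
from rdkit import Chem

# Illustrative flags only: two generic reactive/redox-active motifs that
# stand in for the unpublished Zeneca SMARTS
FLAGS = {
    "quinone": Chem.MolFromSmarts("O=C1C=CC(=O)C=C1"),
    "acyl_halide": Chem.MolFromSmarts("C(=O)[Cl,Br,I]"),
}

def flag_structure(smiles):
    """Return the names of any substructural flags matched by the input."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return ["unparseable"]
    return [name for name, patt in FLAGS.items() if mol.HasSubstructMatch(patt)]

print(flag_structure("O=C1C=CC(=O)C=C1"))  # p-benzoquinone -> ['quinone']
```

As noted above, flags like these are best treated as prompts for closer inspection of the measured data rather than as grounds for automatic rejection.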

Wednesday, 8 February 2023

Chemical probes and permeability


I’ll start this post with reference to a disease that some of you may never have heard of. Chagas disease is caused by the very nasty T. cruzi parasite (not to be confused with the even nastier American politician) and is of particular interest in Latin America, where the disease is endemic. T. cruzi parasites have an essential requirement for ergosterol and, as discussed in C2010, are potentially vulnerable to inhibition of sterol 14α-demethylase (CYP51), which catalyzes a key step in the conversion of lanosterol to ergosterol. However, the CYP51 inhibitor posaconazole (an antifungal medication) showed poor efficacy in clinical trials for chronic Chagas disease. Does this mean that CYP51 is a bad target? The quick answer is “maybe but maybe not” because we can’t really tell whether the lack of efficacy is due to irrelevance of the target or to inadequate exposure.

We commonly invoke the free drug hypothesis (FDH) in drug design, which means that we assume that the free concentration at the site of action is the same as the free plasma concentration (the term ‘free drug theory’ is also commonly used although I prefer FDH). The FDH is covered in the S2010 (see Boxes 1 and 2) and B2013 articles and, given that the targets of small molecule drugs tend to be intracellular, I’ll direct you to the excellent Smith & Rowland perspective on intracellular and intraorgan concentrations of drugs. When we invoke the FDH we’re implicitly assuming that the drug can easily pass through barriers, such as the lipid bilayers that enclose cells, to get to the site of action. In the absence of active transport, the free concentration at the site of action of a drug will tend to lag behind the free plasma concentration, with the magnitude of the lag generally decreasing as permeability increases. Active transport (which typically manifests itself as efflux) is a more serious problem from the design perspective because it leads to even greater uncertainty in the free drug concentration at the site of action, and it’s also worth remembering that transporter expression may vary with cell type. Uncertainty in the free concentration at the site of action is even greater when targeting intracellular pathogens, as is the case for Chagas disease, malaria and tuberculosis.
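The lag between free plasma and intracellular free concentration can be illustrated with a toy single-barrier model. Here’s a minimal sketch in which the rate constants are invented for illustration and the model ignores active transport, binding and metabolism:

```python
import numpy as np

def intracellular_free_conc(c_plasma, k_perm, dt=0.01):
    """Euler integration of dC_cell/dt = k_perm * (C_plasma - C_cell):
    a single passive barrier with no active transport. k_perm lumps
    permeability with the surface-area-to-volume ratio and the values
    used below are purely illustrative."""
    c_cell = np.zeros_like(c_plasma)
    for i in range(1, len(c_plasma)):
        c_cell[i] = c_cell[i - 1] + dt * k_perm * (c_plasma[i - 1] - c_cell[i - 1])
    return c_cell

t = np.arange(0.0, 10.0, 0.01)
c_plasma = np.exp(-0.3 * t)  # decaying free plasma concentration
low_perm = intracellular_free_conc(c_plasma, k_perm=0.5)
high_perm = intracellular_free_conc(c_plasma, k_perm=10.0)
# The highly permeable compound tracks the plasma profile closely while
# the poorly permeable one lags behind, which is why the FDH is a much
# safer assumption for highly permeable compounds.
```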

Some may see chemical probes as consolation prizes in the drug discovery game and, while this may sometimes be the case, we really need to be thinking of chemical probes as things that need to be designed. As is well put in “A conversation on using chemical probes to study protein function in cells and organisms” that was recently published in Nature Communications: 

“But drugs are different from chemical probes. Drugs don’t necessarily need to be as selective as high-quality chemical probes. They just need to get the job done on the disease and be safe to use. In fact, many drugs act on multiple targets as part of their therapeutic mechanism.”

High selectivity and affinity are clear design objectives and, to some extent, optimization of affinity will tend to lead to higher selectivity. High-quality chemical probes for intracellular targets need to be adequately permeable and should not be subject to active transport. The problems caused by active efflux are obvious because chemical probes need to get into cells in order to engage intracellular targets, but there’s another reason that adequate permeability and minimal active transport are especially important for chemical probes. In order to interpret results, you need to know the free concentration of the probe at the site of action, and active transport, whether it manifests itself as efflux or influx, leads to uncertainty in the intracellular free concentration. Although it may be possible to measure intracellular free concentration (see M2013), it’s fiddly to do so if you’re trying to measure target engagement at the same time and it’s not generally possible to do so in vivo. It’s much better to be in a position to invoke the FDH with confidence, and this point is well made in the Smith and Rowland perspective:

“Many misleading assumptions about drug concentrations and access to drug targets are based on total drug. Correction, if made, is usually by measuring tissue binding, but this is limited by the lack of homogenicity of the organ or compartment. Rather than looking for technology to measure the unbound concentration it may be better to focus on designing high lipoidal permeable molecules with a high chance of achieving a uniform unbound drug concentration.”

If the intention is to use a chemical probe for in vivo studies then you’ll need to be confident that adequate exposure at the site of action can be achieved. My view is that it would be difficult to perform a meaningful assessment of the suitability of a chemical probe for in vivo studies without relevant experimental in vivo measurements. You might, however, be able to perform informative in vivo experiments with a chemical probe in the absence of existing pharmacokinetic measurements (provided that you monitor plasma levels and know how tightly the probe is bound by plasma proteins) although you’ll still need to invoke the FDH for intracellular targets.  

If you’re only going to use a chemical probe in cell-based experiments then you really don’t need to worry about achieving oral exposure and this has implications for probe design. The requirement for a chemical probe to have acceptable pharmacokinetic characteristics imposes constraints on design (which may make it more difficult to achieve the desired degree of selectivity) while pharmacokinetic optimization is likely to consume significant resources. As is the case for chemical probes intended for in vivo use, you’ll want to be in a position to invoke the FDH.

In this post, I’ve argued that you need to be thinking very carefully about passive permeability and active transport (whether it leads to efflux or influx) when designing, using or assessing chemical probes. In particular, having experimental measurements available that show that a chemical probe exhibits acceptable passive permeability and is not actively transported will greatly increase confidence that the chemical probe is indeed fit for purpose. It’s not my intention to review methods for measuring passive permeability or active transport in this post although I’ll point you to the B2018, S2021, V2011 and X2021 articles in case any of these are helpful.