Saturday, 1 April 2023

A clear demonstration of the benefits of long residence time

Residence time is a well-established concept in drug discovery and the belief that off-rate is more important than affinity has many adherents in both academia and industry. The concept has been articulated as follows in a Nature Reviews Drug Discovery article:

“Biochemical and cellular assays of drug interactions with their target macromolecules have traditionally been based on measures of drug–target binding affinity under thermodynamic equilibrium conditions. Equilibrium binding metrics such as the half-maximal inhibitory concentration (IC50), the effector concentration for half-maximal response (EC50), the equilibrium dissociation constant (Kd) and the inhibition constant (Ki), all pertain to in vitro assays run under closed system conditions, in which the drug molecule and target are present at invariant concentrations throughout the time course of the experiment [1 | 2 | 3 | 4 | 5]. However, in living organisms, the concentration of drug available for interaction with a localized target macromolecule is in constant flux because of various physiological processes.”

I used to be highly skeptical about the argument that equilibrium binding metrics are not relevant in open systems in which the drug concentration varies with time. The key question for me was always how the rate of change in the drug concentration compares with the rate of binding/unbinding (if the former is slower than the latter then the openness of the in vivo system would seem to be irrelevant). I also used to wonder why an equilibrium binding measurement made in an open system (e.g., Kd from isothermal titration calorimetry) should necessarily be more relevant to the in vivo system than an equilibrium binding measurement made in a series of closed systems (e.g., Ki from an enzyme inhibition assay). Nevertheless, I always needed to balance my concerns against the stark reality that the journal impact factor of Nature Reviews Drug Discovery is a multiple of my underwhelming h-index.
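To make that old concern of mine concrete, here is a minimal sketch (not taken from any of the articles above, and with made-up rate constants and concentrations) of 1:1 binding in an 'open' system in which the free ligand concentration decays exponentially. When the decay is slow relative to the off-rate, the kinetically simulated occupancy tracks the value given by the equilibrium expression almost exactly.

```python
# Minimal sketch (illustrative parameters only): 1:1 binding to a target in an
# 'open' system where free ligand decays exponentially, as a crude stand-in for
# elimination. Kinetic occupancy is compared with the occupancy expected from
# the equilibrium expression at the instantaneous ligand concentration.
# Free ligand is assumed to be in large excess over the target.

import math

kon = 1.0e6      # association rate constant (M^-1 s^-1), assumed
koff = 1.0e-2    # dissociation rate constant (s^-1), assumed
Kd = koff / kon  # equilibrium dissociation constant (M)
kel = 1.0e-4     # first-order decay of free ligand (s^-1), slow relative to koff

L0 = 10 * Kd     # initial free ligand concentration (M)
occ = 0.0        # fractional occupancy of the target
dt = 1.0         # time step (s)

for step in range(int(24 * 3600 / dt) + 1):    # simulate 24 h
    t = step * dt
    L = L0 * math.exp(-kel * t)                # open system: ligand decays
    if step % int(4 * 3600 / dt) == 0:
        occ_eq = L / (L + Kd)                  # equilibrium occupancy at this L
        print(f"t = {t / 3600:4.1f} h  kinetic occupancy = {occ:.3f}  "
              f"equilibrium occupancy = {occ_eq:.3f}")
    # d(occupancy)/dt for 1:1 binding with ligand in excess over target
    occ += dt * (kon * L * (1.0 - occ) - koff * occ)
```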

Any residual doubts about the relevance of residence time completely vanished recently after I examined a manuscript by Prof Maxime de Monne of the Port-au-Prince Institute of Biogerontology, who is currently on secondment to the Budapest Enthalpomics Group (BEG). The manuscript has not yet been made publicly available although, with the help of my associate ‘Anastasia Nikolaeva’ in Tel Aviv, I was able to access it and there is no doubt that this genuinely disruptive study will forever change how we use AI to discover new medicines.

Prof de Monne’s study clearly demonstrates that it is possible to manipulate off-rate independently of on-rate and dissociation constant, provided that binding is enthalpically-driven to a sufficient degree. The underlying mechanism is back-propagation of the binding entropy deficit along the reaction coordinate to the transition state region where the resulting unidirectional conformational changes serve to suppress dissociation of the ligand. The math is truly formidable (my rudimentary understanding of Haitian patois didn’t help either) and involves first projecting the atomic isothermal compressibility matrix into the polarizability tensor before applying the Barone-Samedi transformation for hepatic eigenvalue extraction. ‘Anastasia Nikolaeva’ was also able to ‘liberate’ a prepared press release in which a beaming BEG director Prof Kígyó Olaj explains, “Possibilities are limitless now that we have consigned the tedious and needlessly restrictive Principle of Microscopic Reversibility to the dustbin of history.”

Wednesday, 22 February 2023

Structural alerts and assessment of chemical probes

 << previous |

I’ll wrap up (at least for now) the series of posts on chemical probes by returning to the use of cheminformatic models for assessment of the suitability of compounds for use as chemical probes. My view is that there is currently no cheminformatic model, at least in the public domain, that is usefully predictive of the suitability (or unsuitability) of compounds for use as chemical probes and that assessments should therefore be based exclusively on experimental measurements of affinity, selectivity etc. Put another way, acceptable chemical probes will need to satisfy the same criteria regardless of the extent to which they offend the tastes of PAINS filter evangelists (and if PAINS really are as bad as the evangelists would have us believe then they’re hardly going to satisfy these acceptability criteria). My main criticism of PAINS filters (summarized in this comment on the ACS assay interference editorial) is that there is a significant disconnect between dogma and data. 

I’ll start by saying something about cheminformatics since, taken together, the PAINS substructures can be considered a cheminformatic predictive model. If you’re using a cheminformatic predictive model then you also need to be aware that it will have an applicability domain which is limited by the data used to train and validate the model. Consider, for example, that you have access to a QSAR model for hERG blockade that has been trained and validated using only data for compounds that are protonated at the assay pH. If you base decisions on predictions for compounds that are neutral under assay conditions then you’d be using the model outside its applicability domain (and therefore in a very weak position to blame the modelers if the shit hits the fan). While cheminformatic predictive models might (or might not) help you get to a desired end point more quickly, you’ll still need experimental measurements in order to know that you have indeed reached the desired end point.
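For anyone who wants to make the applicability domain idea a bit more concrete, here is a minimal sketch (with illustrative molecules and an arbitrary similarity cutoff, and certainly not the only way to do this) of flagging a query compound whose nearest-neighbour Tanimoto similarity to the training set is low, using RDKit.

```python
# Minimal sketch (illustrative, not a validated method): flag a query compound
# as potentially outside a model's applicability domain if its nearest-neighbour
# Tanimoto similarity to the training set falls below a cutoff. The training
# SMILES, query SMILES and the 0.35 cutoff are assumptions for illustration.

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

training_smiles = ["CCN(CC)CCNC(=O)c1ccc(N)cc1",    # hypothetical training set
                   "CN1CCN(CC1)c1ccc2[nH]ccc2c1",
                   "O=C(Nc1ccccc1)c1ccncc1"]
query_smiles = "OC(=O)CCCCCCCC"                     # hypothetical query compound

def morgan_fp(smiles):
    """ECFP4-like Morgan fingerprint for a SMILES string."""
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles),
                                                 2, nBits=2048)

training_fps = [morgan_fp(s) for s in training_smiles]
query_fp = morgan_fp(query_smiles)

nearest = max(DataStructs.TanimotoSimilarity(query_fp, fp) for fp in training_fps)
print(f"Nearest-neighbour Tanimoto similarity: {nearest:.2f}")
if nearest < 0.35:    # cutoff is an arbitrary illustrative choice
    print("Query looks to be outside the applicability domain; treat predictions with caution.")
```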

But let’s get back to PAINS filters, which were introduced in this 2010 study. PAINS is an acronym for pan-assay interference compounds and you could be forgiven for thinking that PAINS filters were derived by examining chemical structures of compounds that had been shown to exhibit pan-assay interference. However, the original PAINS study doesn’t appear to present even a single example of a compound that is shown experimentally to exhibit pan-assay interference and the medicinal chemistry literature isn’t exactly bursting at the seams with examples of such compounds.
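For readers who haven’t actually used them, this is roughly how the PAINS substructures get applied in practice. The sketch below uses the catalog distributed with RDKit and an illustrative benzylidene rhodanine (I’d expect it to fire one of the rhodanine-related alerts); the key point is that a match tells you only that a substructure is present, not that any interference has been measured.

```python
# Minimal sketch of applying the PAINS substructures with the catalog that
# ships with RDKit. The SMILES (a benzylidene rhodanine) is illustrative only;
# matching a filter says nothing about measured behaviour in any assay.

from rdkit import Chem
from rdkit.Chem import FilterCatalog

params = FilterCatalog.FilterCatalogParams()
params.AddCatalog(FilterCatalog.FilterCatalogParams.FilterCatalogs.PAINS)  # A + B + C
catalog = FilterCatalog.FilterCatalog(params)

mol = Chem.MolFromSmiles("O=C1NC(=S)SC1=Cc1ccccc1")   # illustrative structure
if catalog.HasMatch(mol):
    entry = catalog.GetFirstMatch(mol)
    print("PAINS alert triggered:", entry.GetDescription())
else:
    print("No PAINS alert triggered")
```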

The data set on which the PAINS filters were trained consisted of the hits (assay results in which the response was greater than a threshold when the compound was tested at a single concentration) from six high-throughput screens, each of which used AlphaScreen read-out. Although PAINS filters are touted as predictors of pan-assay interference, it would be more accurate to describe them as predictors of frequent-hitter behavior in this particular assay panel (as noted in a previous post, promiscuity generally increases as the activity threshold is made more permissive). From a cheminformatic perspective, the choice of this assay panel appears to represent a suboptimal design of an experiment to detect and characterize pan-assay interference (especially given that data from “more than 40 primary screening campaigns against enzymes, ion channels, protein-protein interactions, and whole cells” were available for analysis). Those who advocate the use of PAINS filters for the assessment of the suitability of compounds for use as chemical probes (and the Editors-in-Chief of more than one ACS journal) may wish to think carefully about why they are ignoring a similar study based on a larger, more diverse (in terms of targets and read-outs) data set that had been published four years before the PAINS study.

Although a number of ways in which potential nuisance compounds can reveal their dark sides are discussed in the original PAINS study, the nuisance behavior is not actually linked to the frequent-hitter behavior reported for compounds in the assay panel. Also, it can be safely assumed that none of the six protein-protein interaction targets of the PAINS assay panel feature a catalytic cysteine and my view is that any frequent-hitter behavior observed in the assay panel for ‘cysteine killers’ is more likely to be due to reaction with (or quenching of) singlet oxygen. It’s also worth pointing out that when compounds are described as exhibiting pan-assay interference (or as frequent hitters) the relevant nuisance behavior has often been predicted (or assumed) rather than demonstrated with measured data. I would argue that even a ‘maximal PAINS response’ (the compound is actually observed as a hit in each of the six assays of the PAINS assay panel) would not rule out the use of a compound as a chemical probe.

I have argued on cheminformatic grounds that it’s not appropriate to use PAINS filters for assessment of potential probes but there’s another reason that those seeking to set standards for chemical probes shouldn’t really be endorsing the use of PAINS filters for this purpose. “A conversation on using chemical probes to study protein function in cells and organisms” that was recently published in Nature Communications stresses the importance of Open Science. However, the PAINS structural alerts were trained on proprietary data and using PAINS filters to assess potential chemical probes will ultimately raise questions about the level of commitment to Open Science. I made a very similar point in my comment on the ACS assay interference editorial (Journal of Medicinal Chemistry considers the publication of analyses of proprietary data to be generally unacceptable).

Let’s take a look at “The promise and peril of chemical probes” that was published in Nature Chemical Biology in 2015. The authors state:

“We learned that many of the chemical probes in use today had initially been characterized inadequately and have since been proven to be nonselective or associated with poor characteristics such as the presence of reactive functionality that can interfere with common assay features [3] (Table 2). The continued use of these probes poses a major problem: tens of thousands of publications each year use them to generate research of suspect conclusions, at great cost to the taxpayer and other funders, to scientific careers and to the reliability of the scientific literature.”

Now let’s look at Table 2 (Examples of widely used low-quality probes) from "The promise and peril of chemical probes". You’ll see “PAINS” in the problems column of Table 2 for two of the six low-quality probes and this rings a number of alarm bells for me. Specifically, it is asserted that flavones are “often promiscuous and can be pan-assay interfering (PAINS) compounds” and that epigallocatechin-3-gallate is a “promiscuous PAINS compound”, which raises a number of questions. Were the (unspecified) flavones and epigallocatechin-3-gallate actually observed to be promiscuous and, if so, what activity threshold was used for quantifying promiscuity? Were any of the (unspecified) flavones or epigallocatechin-3-gallate actually observed to exhibit pan-assay interference? Were affinity and selectivity measurements actually available for the (unspecified) flavones or epigallocatechin-3-gallate?

I’ll conclude the post by saying something about cheminformatic predictive models. First, to use a cheminformatic predictive model outside its applicability domain is a serious error (and will cast doubts on the expertise of anybody doing so). Second, predictions might (or might not) help you get to a desired end point, but you’ll still need measured data to establish that you’ve reached the desired end point or that a compound is unfit for a particular purpose.

Wednesday, 15 February 2023

Frequent-hitter behavior and promiscuity

I’ll be discussing promiscuity in this post and, if there’s one thing that religious leaders and drug discovery scientists agree on, it’s that promiscuity is a Bad Thing. In the drug discovery context compounds that bind to many targets or exhibit ‘activity’ in many assays are described as promiscuous. I first became aware that promiscuity was a practical (as opposed to a moral) problem when we started to use high-throughput screening (HTS) at Zeneca in the mid-1990s and we soon learned that not all screening output smells of roses (the precursor company ICI had been a manufacturer of dyestuffs which are selected/designed to be brightly colored and for their ability to stick to stuff).

You’ll often encounter assertions in the scientific literature that compounds are promiscuous and my advice is to carefully check the supporting evidence if you plan to base decisions on the information. In many cases, you’ll find that the ‘promiscuity’ is actually predicted and the problem with many cheminformatic models is that you often (usually?) don’t know how predictive the model is going to be for the compounds that you’re interested in. You have to be careful when basing decisions on predictions because it is not unknown for the predictivity of models, and the strength of trends in data, to be overstated. As detailed in this article, relationships between promiscuity (defined as the number of assays for which ‘activity’ exceeds a specified threshold) and physicochemical descriptors such as lipophilicity or molecular weight are made to appear rather stronger than they actually are. The scope of models may also be overstated and claims that compounds exhibit pan-assay interference have been made on the basis that the compounds share structural features with other compounds (the structures were not disclosed) that were identified as frequent-hitters in a panel of six assays that all use the AlphaScreen read-out.
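One way in which a trend can be made to look stronger than it is (offered here purely as an illustration, not as a claim about what was done in any particular study) is to correlate bin averages rather than the underlying measurements, as the following sketch with synthetic data shows.

```python
# Minimal sketch (synthetic data) of how correlating bin averages rather than
# the underlying measurements inflates an apparent trend. The 'lipophilicity'
# and 'promiscuity' values are random numbers with a deliberately weak
# relationship built in.

import random
import statistics

random.seed(7)
logp = [random.uniform(0, 6) for _ in range(2000)]
promiscuity = [0.5 * x + random.gauss(0, 3) for x in logp]   # weak trend + lots of noise

def pearson_r(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

print(f"r for raw data:        {pearson_r(logp, promiscuity):.2f}")

# Average promiscuity within six 1-log-unit lipophilicity bins, then correlate
# the bin centres with the bin means (discarding all the within-bin variation).
bins = {b: [] for b in range(6)}
for x, y in zip(logp, promiscuity):
    bins[min(int(x), 5)].append(y)
bin_centres = [b + 0.5 for b in bins]
bin_means = [statistics.mean(bins[b]) for b in bins]
print(f"r for binned averages: {pearson_r(bin_centres, bin_means):.2f}")
```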

The other reason that you need to be wary of statements that compounds are promiscuous is that the number of assays for which ‘activity’ exceeds a threshold increases as you make the threshold more permissive (I was actually taught about the relationship between permissiveness and promiscuity by the Holy Ghost Fathers at high school in Port of Spain). I’ve ranked some different activity thresholds by permissiveness in Figure 1, which will hopefully give you a clearer idea of what I’m getting at. In general, it is prudent to be skeptical of any claim that promiscuity quantified using a highly permissive activity threshold (e.g., ≥ 50% response at 10 μM) is necessarily relevant in situations where the level of activity against the target of interest is much greater (e.g., IC50 = 20 nM with a well-behaved concentration response and confirmed by affinity measurement in an SPR assay). My own view is that compounds should only be described as promiscuous when concentration responses have been measured for the relevant ‘activities’ and I prefer to use the term ‘frequent-hitter’ when ‘activity’ is defined in terms of a response in the assay read-out that exceeds a particular cut-off value.
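The toy example below (made-up % response values for a hypothetical six-assay panel) shows how the apparent promiscuity of the same compounds changes as the hit threshold is made more permissive.

```python
# Minimal sketch (made-up numbers) of why apparent promiscuity depends on the
# activity threshold: the same % response matrix yields very different hit
# counts per compound as the threshold is made more permissive.

# Rows: compounds; columns: % response in six hypothetical single-concentration assays.
responses = {
    "compound_A": [92, 85, 14, 60, 55, 71],
    "compound_B": [51, 48, 47, 52, 49, 50],
    "compound_C": [12,  8, 95,  5, 11,  9],
}

for threshold in (80, 50, 30):    # lower cutoff = more permissive
    print(f"Hit threshold >= {threshold}% response")
    for name, panel in responses.items():
        n_hits = sum(r >= threshold for r in panel)
        print(f"  {name}: hit in {n_hits} of {len(panel)} assays")
```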

Frequent-hitter behavior is a particular concern in analysis of HTS output and an observation that a hit compound in the assay of interest also hits in a number of other assays raises questions about whether further work on the compound is justified.  In a comment on the ACS assay interference editorial, I make the point that the observation that a compound is a frequent hitter may reflect interference with read-out (which I classified as Type 1 behavior) or an undesirable mechanism of action (which I classified as Type 2 behavior). It is important to make a distinction between these two types of behavior because they are very different problems that require very different solutions. One criticism that I would make of the original PAINS study, the chemical con artists perspective in Nature and the ACS assay interference editorial is that none of these articles make a distinction between these two types of nuisance behavior.

I’ll first address interference with assay read-out and the problem for the drug discovery scientist is that the ‘activity’ is not real. One tactic for dealing with this problem is to test the hit compounds in an assay that uses a different read-out although, as described in this article by some ex-AstraZeneca colleagues, it may be possible to assess and even correct for the interference using a single assay read-out. Interference with read-out should generally be expected to increase as the activity threshold is made more permissive (this is why biophysical methods are often preferred for detection and quantitation of fragment binding) and you may find that a compound that interferes with a particular assay read-out at 10 μM does not exhibit significant interference at 100 nM. Interference with read-out should be seen as a problem with the assay rather than a problem with the compound. 

An undesirable mechanism of action is a much more serious problem than interference with read-out and testing hit compounds in an assay that uses a different read-out doesn’t really help because the effects on the target are real.  Some undesirable mechanisms of action such as colloidal aggregate formation are relatively easy to detect (see Aggregation Advisor website) but determining the mechanism of action typically requires significant effort and is more challenging when potency is low. An undesirable mechanism of action should be seen as a problem with the compound rather than a problem with the assay and my view is that this scenario should not be labelled as assay interference.

I’ll wrap up with a personal perspective on frequent-hitters and analysis of HTS output although I believe my experiences were similar to those of others working in industry at the time. From the early days of HTS at Zeneca where I worked it was clear that many compounds with ‘ugly’ molecular structures were getting picked up as hits but it was often difficult to demonstrate objectively that ugly hits were genuinely unsuitable for follow-up. We certainly examined frequent-hitter behavior although some ‘ugly’ hits were not frequent-hitters. We did use SMARTS-based substructural flags (referred to as the ‘de-crapper’ by some where I worked) for processing HTS output and we also looked at structural neighborhoods for hit structures using Flush (the lavatorial name of the software should provide some insight into how we viewed analysis of HTS output). The tactics we used at Zeneca (and later at AstraZeneca) were developed using real HTS data and I don’t think anybody would have denied that there was a subjective element to the approaches that we used.    
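For anyone curious what SMARTS-based flagging of HTS output looks like in practice, here is a minimal sketch using RDKit. The two alert patterns and the ‘hit’ structures are illustrative assumptions only and bear no relation to the actual substructural flags described above.

```python
# Minimal sketch of SMARTS-based substructural flagging of HTS hits. The alert
# SMARTS (quinone, acyl halide) and the hit SMILES are illustrative only.

from rdkit import Chem

alerts = {
    "quinone": Chem.MolFromSmarts("O=C1C=CC(=O)C=C1"),
    "acyl_halide": Chem.MolFromSmarts("C(=O)[Cl,Br,I]"),
}

hits = ["O=C1C=CC(=O)C=C1", "CC(=O)Nc1ccc(O)cc1"]   # hypothetical HTS hits

for smiles in hits:
    mol = Chem.MolFromSmiles(smiles)
    flagged = [name for name, patt in alerts.items() if mol.HasSubstructMatch(patt)]
    print(smiles, "->", flagged if flagged else "no flags")
```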

Wednesday, 8 February 2023

Chemical probes and permeability

<< previous | next >>

I’ll start this post with reference to a disease that some of you may never have heard of. Chagas disease is caused by the very nasty T. cruzi parasite (not to be confused with the even nastier American politician) and is of particular interest in Latin America, where the disease is endemic. T. cruzi parasites have an essential requirement for ergosterol and, as discussed in C2010, are potentially vulnerable to inhibition of sterol 14α-demethylase (CYP51), which catalyzes a key step (removal of the 14α-methyl group from lanosterol) in ergosterol biosynthesis. However, the CYP51 inhibitor posaconazole (an antifungal medication) showed poor efficacy in clinical trials for chronic Chagas disease. Does this mean that CYP51 is a bad target? The quick answer is “maybe but maybe not” because we can’t really tell whether the lack of efficacy is due to irrelevance of the target or inadequate exposure.

We commonly invoke the free drug hypothesis (FDH) in drug design which means that we assume that the free concentration at the site of action is the same as the free plasma concentration (the term ‘free drug theory’ is also commonly used although I prefer FDH). The FDH is covered in the S2010 (see Box 1 and 2) and B2013 articles and, given that the targets of small molecule drugs tend to be intracellular, I’ll direct you to the excellent Smith & Rowland perspective on intracellular and intraorgan concentrations of drugs.  When we invoke the FDH we’re implicitly assuming that the drug can easily pass through barriers, such as the lipid bilayers that enclose cells, to get to the site of action.  In the absence of active transport, the free concentration at the site of action of a drug will tend to lag behind the free plasma concentration with the magnitude of the lag generally decreasing with permeability. Active transport (which typically manifests itself as efflux) is a more serious problem from the design perspective because it leads to even greater uncertainty in the free drug concentration at the site of action and it’s also worth remembering that transporter expression may vary with cell type. It’s worth mentioning that uncertainty in the free concentration at the site of action is even greater when targeting intracellular pathogens, as is the case for Chagas disease, malaria and tuberculosis.
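Going back to the lag mentioned above, here is a minimal sketch with deliberately crude assumptions (a single well-stirred cell compartment, passive permeability only, no active transport, and an exponentially declining free plasma concentration). It shows that the intracellular free concentration tracks plasma closely when permeability is high but undershoots early and overshoots late when permeability is low.

```python
# Minimal sketch (illustrative numbers only) of how intracellular free
# concentration fails to track the free plasma concentration when passive
# permeability is low, and how the lag shrinks as permeability increases.
# No active transport is modelled, so this is the simplest FDH-adjacent case.

import math

kel = 0.5     # first-order decay of free plasma concentration (h^-1), assumed
Cp0 = 1.0     # initial free plasma concentration (arbitrary units)
dt = 0.01     # time step (h)

report_steps = {int(round(tr / dt)) for tr in (1.0, 4.0, 8.0)}

for k_perm in (0.2, 2.0, 20.0):     # permeability-limited equilibration rate (h^-1)
    Ccell, parts = 0.0, []
    for step in range(int(8 / dt) + 1):
        t = step * dt
        Cp = Cp0 * math.exp(-kel * t)           # free plasma concentration
        if step in report_steps:
            parts.append(f"t={t:.0f}h cell/plasma={Ccell / Cp:.2f}")
        Ccell += dt * k_perm * (Cp - Ccell)     # passive equilibration only
    print(f"k_perm = {k_perm:4.1f} h^-1:  " + "  ".join(parts))
```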

Some may see chemical probes as consolation prizes in the drug discovery game and, while this may sometimes be the case, we really need to be thinking of chemical probes as things that need to be designed. As is well put in “A conversation on using chemical probes to study protein function in cells and organisms” that was recently published in Nature Communications: 

“But drugs are different from chemical probes. Drugs don’t necessarily need to be as selective as high-quality chemical probes. They just need to get the job done on the disease and be safe to use. In fact, many drugs act on multiple targets as part of their therapeutic mechanism.”

High selectivity and affinity are clear design objectives and, to some extent, optimization of affinity will tend to lead to higher selectivity. High-quality chemical probes for intracellular targets need to be adequately permeable and should not be subject to active transport. The problems caused by active efflux are obvious because chemical probes need to get into cells in order to engage intracellular targets but there’s another reason that adequate permeability and minimal active transport are especially important for chemical probes. In order to interpret results, you need to know the free concentration of the probe at the site of action and active transport, whether it manifests itself as efflux or influx, leads to uncertainty in the intracellular free concentration. Although it may be possible to measure intracellular free concentration (see M2013), it’s fiddly to do so if you’re trying to measure target engagement at the same time and it’s not generally possible to do so in vivo. It's much better to be in a position to invoke the FDH with confidence and this point is well made in the Smith and Rowland perspective:

“Many misleading assumptions about drug concentrations and access to drug targets are based on total drug. Correction, if made, is usually by measuring tissue binding, but this is limited by the lack of homogenicity of the organ or compartment. Rather than looking for technology to measure the unbound concentration it may be better to focus on designing high lipoidal permeable molecules with a high chance of achieving a uniform unbound drug concentration.”

If the intention is to use a chemical probe for in vivo studies then you’ll need to be confident that adequate exposure at the site of action can be achieved. My view is that it would be difficult to perform a meaningful assessment of the suitability of a chemical probe for in vivo studies without relevant experimental in vivo measurements. You might, however, be able to perform informative in vivo experiments with a chemical probe in the absence of existing pharmacokinetic measurements (provided that you monitor plasma levels and know how tightly the probe is bound by plasma proteins) although you’ll still need to invoke the FDH for intracellular targets.  

If you’re only going to use a chemical probe in cell-based experiments then you really don’t need to worry about achieving oral exposure and this has implications for probe design. The requirement for a chemical probe to have acceptable pharmacokinetic characteristics imposes constraints on design (which may make it more difficult to achieve the desired degree of selectivity) while pharmacokinetic optimization is likely to consume significant resources. As is the case for chemical probes intended for in vivo use, you’ll want to be in a position to invoke the FDH.

In this post, I’ve argued that you need to be thinking very carefully about passive permeability and active transport (whether it leads to efflux or influx) when designing, using or assessing chemical probes. In particular, having experimental measurements available that show that a chemical probe exhibits acceptable passive permeability and is not actively transported will greatly increase confidence that the chemical probe is indeed fit for purpose. It’s not my intention to review methods for measuring passive permeability or active transport in this post although I’ll point you to the B2018, S2021, V2011 and X2021 articles in case any of these are helpful.

Saturday, 28 January 2023

More approaches to design of covalent inhibitors of SARS-CoV-2 main protease

<< previous |

I’ll pick up from the previous post on design of covalent inhibitors of SARS-CoV-2 main protease (structure and chart numbering follows from there). As noted previously, I really think that you need to exploit conserved structural features, such as the catalytic residues and the oxyanion hole, if you’re genuinely concerned about resistance and I do consider it a serious error to make a virtue out of non-covalency. As in the previous post, I've linked designs to the original Covid Moonshot submissions whenever possible.

I’ll kick the post off with 14 (Chart 5), which replaces a methylene in the lactam ring of 10 (Chart 4 in the previous post) with oxygen. This structural transformation results in a 0.8 log unit reduction in lipophilicity (at least according to the algorithm used for the Covid Moonshot) and might also simplify the synthesis.
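For anyone who wants to see how this kind of calculated comparison is typically done, here is a minimal sketch using RDKit’s Crippen logP on a generic lactam/morpholinone pair. These are not the actual Chart 4/5 structures and the Covid Moonshot used a different lipophilicity algorithm, so don’t expect the 0.8 log unit figure to be reproduced.

```python
# Minimal sketch of a calculated-lipophilicity comparison for a CH2 -> O swap
# in a lactam ring, using RDKit's Crippen logP. The pair below is a generic
# illustration (N-methyl valerolactam vs N-methyl morpholin-3-one), not the
# actual Covid Moonshot structures, and the algorithm differs from the one
# referenced in the post, so the numbers will differ too.

from rdkit import Chem
from rdkit.Chem import Crippen

pair = {
    "N-methyl valerolactam (CH2 in ring)": "CN1CCCCC1=O",
    "N-methyl morpholin-3-one (O in ring)": "CN1CCOCC1=O",
}

logps = {name: Crippen.MolLogP(Chem.MolFromSmiles(smi)) for name, smi in pair.items()}
for name, value in logps.items():
    print(f"{name}: calculated logP = {value:.2f}")

delta = (logps["N-methyl valerolactam (CH2 in ring)"]
         - logps["N-methyl morpholin-3-one (O in ring)"])
print(f"Calculated change on CH2 -> O swap: {delta:.2f} log units")
```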
Designs 15 and 16 (also in Chart 5) link the nitrile warhead from nitrogen rather than carbon and this structural transformation eliminates a chiral centre in each of 10 and 11 (Chart 4 in the previous post) and may be beneficial for affinity (see discussion around 8 and 9 in Chart 3 of the previous post). In substituted hydrazine derivatives, the nitrogen lone pairs (or the π-systems in which the nitrogens sit) tend to avoid each other and so I’d expect the nitrile warheads of 15 and 16 to adopt axial orientations. I’d anticipate that the nitrile warhead will be directed toward the catalytic cysteine for 15 but away from the catalytic cysteine for 16 and I favor the former for this reason. It's also worth mentioning that even if the nitrile is directed away from the catalytic cysteine it may still occupy the oxyanion hole.

I’ll finish with a couple of designs based on aromatic sulfur that are shown in Chart 6. Design 17 was originally submitted by Vladas Oleinikovas, although I’ll also link my resubmission of this design because the notes include a detailed discussion of the design rationale along with a proposed binding mode. My view is that the catalytic cysteine could get within striking distance of the ring sulfur (which can function as a chalcogen-bond donor and potentially even as an electrophile). Although 2,1-benzothiazole is not obviously electrophilic, it’s worth noting that acetylene linked by saturated carbon can replace the nitrile as an electrophilic warhead (this isosteric replacement leads to irreversible inhibition, as discussed in this article). I’ve also included 18, which replaces 2,1-benzothiazole with (what I’d assume is) a more electrophilic heterocycle. I would anticipate that any covalent inhibition by these compounds will be irreversible.




Wednesday, 25 January 2023

Assessment of chemical probes: response to Practical Fragments

<< previous | next >>

I had originally intended to look at permeability in this post but I do need to respond to Dan Erlanson’s post at Practical Fragments. I see Dan’s position (“everything is an artifact until proven otherwise”) as actually very similar to my position (“chemical probes will have to satisfy the same set of acceptability criteria whether or not they trigger structural alerts”) and we’re both saying that you need to perform the necessary measurements if you’re going to claim that a compound is acceptable for use as a chemical probe. Where Dan’s and my respective positions appear to diverge is that I consider structural alerts based on primary screening output (i.e., % response when assayed at a single concentration) to be of minimal value for assessment of optimized chemical probes. My comment on the “The Ecstasy and Agony of Assay Interference Compounds” editorial should make this position clear. 

Thursday, 19 January 2023

Some approaches to design of covalent inhibitors of SARS-CoV-2 main protease

<< previous | next >>

I last posted on Covid-19 early in 2021 and quite a lot has happened since then. Specifically, a number of vaccines are now available (I received my first dose of AstraZeneca CoviShield in May 2021 while still stranded in Trinidad) and paxlovid has been approved for use as a Covid-19 treatment (Derek describes his experiences taking paxlovid in this post).  The active ingredient of paxlovid is the SARS-CoV-2 main protease inhibitor nirmatrelvir and the ritonavir with which it is dosed serves only to reduce clearance of nirmatrelvir by inhibiting metabolic enzymes. In the current post, I’ll be looking at covalent inhibition of SARS-CoV-2 main protease with a specific focus on reversibility and here are some notes that I whipped up as a contribution to the Covid Moonshot.

Nirmatrelvir (1) is shown in Chart 1 along with SARS-CoV-2 main protease inhibitors from the Covid Moonshot (2), a group of (mainly) Sweden-based academic researchers (3) and Yale University (4).  Nirmatrelvir incorporates a nitrile group that forms a covalent bond with the catalytic cysteine and the other inhibitors bind non-covalently to the target. The first example of a nitrile-based cysteine protease inhibitor that I’m aware of was published over half a century ago and the nitrile warhead has since proved popular with designers of cysteine protease inhibitors (it has a small steric footprint and is not generally associated with metabolic lability or chemical instability). Furthermore, covalent bond formation between the thiol of a catalytic cysteine and the carbon of the nitrile warhead is typically reversible. Here’s a recent review on the nitrile group in covalent inhibitor design and this comparative study of electrophilic warheads may also be of interest.

At this point, we should be thinking about the directions in which design of SARS-CoV-2 main protease inhibitors needs to go. Two directions I see as potentially productive are dose reduction (a course of paxlovid treatment consists of two 150 mg nirmatrelvir tablets and one 100 mg ritonavir tablet taken twice daily for five days) and countering resistance (here’s a relevant article).

Two tactics for achieving a lower therapeutic dose are to increase affinity and reduce clearance. Dose prediction is not as easy as you might think because the predictions are typically very sensitive to input parameters. For example, a two-fold difference in IC50 would often be regarded as within normal assay variation by medicinal chemists but development scientists and clinicians would view doses of 300 mg and 600 mg very differently. 
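A back-of-the-envelope sketch (with entirely made-up pharmacokinetic parameters and the crudest possible steady-state model) shows why: the estimated dose scales linearly with the potency-derived target concentration, so a two-fold difference in IC50 propagates directly into a two-fold difference in predicted dose.

```python
# Back-of-the-envelope sketch (all parameter values are made up) showing why a
# 'mere' two-fold difference in IC50 translates directly into a two-fold
# difference in a simple steady-state dose estimate. The model is the crude
# one-compartment relationship C_avg,ss = F * Dose / (CL * tau), with the
# target average free concentration set to a multiple of the unbound IC50.

def estimate_dose_mg(ic50_nM, mw=500.0, fu=0.2, cl_L_per_h=20.0, tau_h=12.0,
                     F=0.5, multiple_over_ic50=3.0):
    """Dose (mg) needed to hold the average free plasma concentration at
    multiple_over_ic50 * IC50 over the dosing interval. All inputs are assumed."""
    target_free_molar = multiple_over_ic50 * ic50_nM * 1e-9     # mol/L
    target_total_molar = target_free_molar / fu                 # correct for plasma binding
    dose_mol = target_total_molar * cl_L_per_h * tau_h / F      # from C_avg,ss relationship
    return dose_mol * mw * 1e3                                  # mol -> mg

for ic50 in (20.0, 40.0):    # nM; a 'within assay variability' two-fold difference
    print(f"IC50 = {ic50:.0f} nM -> estimated dose ~ {estimate_dose_mg(ic50):.0f} mg per administration")
```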

Excessive clearance is a problem from the perspective of achieving adequate exposure and I'd also anticipate greater variability in exposure between patients when clearance is high. Clearance is clearly an issue for nirmatrelvir because it needs to be co-dosed with ritonavir (to inhibit metabolic enzymes) and this has implications for patients taking other medications. Nirmatrelvir lacks aromatic rings and deuteration is an obvious tactic to reduce metabolic lability (although cost of goods is likely to be more of an issue than for a cancer medicine that you'll need to take out a second mortgage for). I would anticipate that bicyclo[1.1.1]pentanyl will be less prone to metabolism than t-butyl (CH bonds tend to be stronger in strained rings and for bridgehead CHs) and the binding mode suggests that this replacement could be accommodated.

Details of resistance to nirmatrelvir (P2022 | Z2022) are starting to emerge and this information should certainly be used in design and to assess other structural series. Nevertheless, if you’re genuinely concerned about potential for resistance then you really can’t afford to ignore conserved structural features in the target such as the catalytic residues (cysteine and histidine) and the oxyanion hole. I would also anticipate that the risk of resistance will increase with the spatial extent of the inhibitor.

This post is about covalent inhibitors. Although I’m pleasantly surprised by the potencies achieved for non-covalent SARS-CoV-2 main protease inhibitors, I consider making a virtue of non-covalent inhibition to be a serious error. Binding of covalent inhibitors to their targets can be reversible or irreversible and, in the context of design, reversible covalent inhibitors have a lot more in common with non-covalent inhibitors than with irreversible covalent inhibitors (for example, you can't generally use mass spectrometry to screen covalent fragments that bind reversibly). In the context of drug design, covalent bonds have much more stringent geometric requirements than non-covalent interactions such as hydrogen bonds.

I generally favor reversible binding when targeting catalytic cysteines as discussed in these notes and this article. It is typically less difficult to design reversible covalent inhibitors to target a catalytic cysteine than it is to design irreversible covalent inhibitors because you can use crystal structures of protein-ligand complexes just as you would for non-covalent inhibitors. In contrast, the crystal structure of a protein-ligand complex (the reaction ‘product’) is not especially relevant in design of irreversible inhibitors because target engagement is under kinetic rather than thermodynamic control and the more relevant transition state models must therefore be generated computationally. Furthermore, assays for irreversible inhibitors are more complex, and assessment of functional selectivity and safety is more difficult than for reversible inhibitors. All that said, however, I’m certainly not of the view that irreversible inhibitors are inherently inferior to reversible inhibitors for targeting catalytic cysteines. This is also a good point to mention an article which shows how isosteric replacement (with an alkyne) of the nitrile warhead of the reversible cathepsin K inhibitor odanacatib results in an irreversible inhibitor (the article is particularly relevant if you’re interested in chemical probes for cysteine proteases).

I contributed some designs for reversible covalent inhibitors to the Covid Moonshot and it may be helpful to discuss some of them. Each design was intended to link the nitrile warhead to the ‘3-aminopyridine-like’ scaffold used in the Covid Moonshot which means that the designs all use a heteroaromatic P1 group (typically isoquinoline linked at C4) rather than the chiral P1 group (pyrrolidinone linked at C3) used for nirmatrelvir and a number of other SARS-CoV-2 main protease inhibitors. The ‘3-aminopyridine-like’ scaffold lacks essential hydrogen bond donors (elimination of hydrogen bond donors is suggested as a tactic for increasing aqueous solubility in this article). One of the cool things about the way the Covid Moonshot was set up is that I can link designs as they were originally submitted (often with a detailed rationale and proposed binding mode).

The most direct way to link a nitrile to the ‘3-aminopyridine-like’ scaffold is with methylene (5, Chart 2) but there is a problem with this approach because substituting anilides (and their aza-analogs) on nitrogen with sp3 carbon inverts the cis/trans geometrical preference of the anilides (I discussed the design implications of this in these notes).  This implies that binding of 5 to the target is expected to incur a conformational energy penalty and it is significant that N-methylation of 6 results in a large reduction in potency. Although 5 was inactive in the enzyme inhibition assay, I think that it would still be worth seeing if covalent bond formation can be observed by crystallography for this compound.

However, you won’t invert the cis/trans geometrical preference if you substitute an anilide nitrogen with nitrogen rather than sp3 carbon (Chart 3). This was the basis for submitting 8, which is related to azapeptide nitriles, as a design. Azapeptide nitriles [L2008 | Y2012 | L2019 | B2022] are typically more potent than the corresponding peptide nitriles and, to be honest, this remains something of a mystery to me (one possibility is that the imine nitrogen of the azapeptide nitrile adduct is more basic than that of the corresponding peptide nitrile adduct and is predominantly protonated under assay conditions). I see cyanohydrazines and cyanamides as functional groups that would be worth representing in fragment libraries if you want to target catalytic cysteine residues and I’ll point you toward a relevant crystal structure. The acyclic hydrazine and cyanamide substructures in 8 trigger structural alerts although there are approved drugs that incorporate acyclic hydrazine (atazanavir | bumadizone | gliclazide | goserelin | isocarboxazid | isoniazid) and N-cyano (cimetidine) substructures. The basis for these structural alerts is obscure and it’s worth noting that 8 is incorrectly flagged as an enamine and as having a nitrogen-oxygen single bond. As a cautionary tale on structural alerts, I’ll refer you to this comment in which I read the riot act (i.e., the JMC guidelines for authors) to a number of ACS journal EiCs. Nevertheless, I’d still worry about the presence of an acyclic hydrazine substructure although these concerns would be eased if each nitrogen atom was bonded to an electron-withdrawing group, as is the case for 8, and all NHs were capped (see 9).


An alternative tactic to counter inversion of the cis/trans geometrical preference is to lock the conformation with a ring and designs 10 and 11 (Chart 4) can be seen as 'hybrids' of 5 with 12 and 13 respectively (in fragment-based design, hybridization is usually referred to as fragment merging). The effect of the conformational lock can be clearly seen since 12 and 13 are essentially equipotent with 6 (the primary reason for proposing 12 and 13 as designs was actually to present the nitrile warhead to the catalytic cysteine). A substituent on carbon next to a lactam nitrogen tends to adopt an axial orientation and I’d anticipate that 10 will be less prone to epimerization than 11. Although I'm unaware of nitriles being deployed on cyclic amine substructures for cysteine protease inhibition, the structures of the DPP-4 inhibitors saxagliptin and vildagliptin are relevant.


This is a good point at which to wrap up. If cysteine protease inhibition is a key component of pandemic preparedness strategy then you really do need to be thinking about covalent inhibition.  I'll be looking at some more design themes for covalent inhibitors of SARS-CoV-2 in the next Covid post.