Sunday, 9 August 2020

How not to repurpose a 'drug'


I sometimes wonder what percentage of the pharmacopoeia will have been proposed for repurposing for the treatment of COVID-19 by the end of 2020. In particular, I worry about the long-term psychological effects on bloggers such as Derek, who is forced to play whack-a-mole with hydroxychloroquine repurposing studies. Those attempting to use text mining and machine learning to prioritize drugs for repurposing should take note of the views expressed in this tweet.

The idea behind drug repurposing is very simple. If an existing drug looks like it might show therapeutic benefits in the disease that you’re trying to treat then you can go directly to assessing efficacy in humans without having to do any of those irksome Phase I studies. However, you need to be aware that the approval of a drug always places restrictions on the dose that you can use and the route of administration (for example, you can't administer a drug intravenously if it has only been approved for oral administration). One rationale for drug repurposing is that the target(s) for the drug may also have a role in the disease that you’re trying to treat. Even if the target is not directly relevant to the disease, the drug may engage a related target that is relevant with sufficient potency to have a therapeutically exploitable effect. While these rationales are clear, I do get the impression that some who use text-mining and machine learning to prioritize drugs for repurposing may simply be expecting the selected drugs to overwhelm targets with druglikeness.

There are three general approaches to directly tackle a virus such as SARS-CoV-2 with a small molecule drug (or chemical agent). First, destroy the virus before it even sees a host cell; this is the objective of hand-washing and disinfection of surfaces. Second, prevent the virus from infecting host cells, for example, by blocking the interaction between the spike protein and ACE2. Third, prevent the virus from functioning in infected cells, for example, by inhibiting the SARS-CoV-2 main protease. One can also try to mitigate the effects of viral infection, for example, by using anti-inflammatory drugs to counter cytokine storm, although I’d not regard this as tackling the virus directly.

In this post, I’ll be reviewing an article which suggests that quaternary ammonium compounds could be repurposed for treatment of COVID-19. The study received NIH funding and this may be of interest to researchers who failed to secure NIH funding. The article was received on 06-May-2020, accepted on 18-May-2020 and published on 25-May-2020. One of the authors of the article is a member of the editorial advisory board of the journal. As of 08-Aug-2020, two of the authors are described as cheminformatics experts in their Wikipedia biographies and one is also described as an expert in computational toxicology. 

The authors state: “This analysis identified ammonium chloride, which is commonly used as a treatment option for severe cases of metabolic alkalosis, as a drug of interest. Ammonium chloride is a quaternary ammonium compound that is known to also have antiviral activity (13,14) against coronavirus (Supplementary Material) and has a mechanism of action such as raising the endocytic and lysosomal pH, which it shares with chloroquine (15). Review of the text-mined literature also indicated a high-frequency of quaternary ammonium disinfectants as treatments for many viruses (Supplementary Material) (16,17), including coronaviruses: these act by deactivating the protective lipid coating that enveloped viruses like SARS-CoV-2 rely on.”

Had I described ammonium chloride as a “quaternary ammonium compound” at high school in Trinidad (I was taught by the Holy Ghost Fathers), I’d have received a correctional package of licks and penance. For cheminformatics ‘experts’ to make such an error should remind us that each and every expert has an applicability domain and a shelf life. However, the errors are not confined to nomenclature since the cationic nitrogen atoms of a quaternary ammonium compound and a protonated amine are very different beasts. While a protonated amine can deprotonate in order to cross a lipid bilayer, the positive charge of a quaternary ammonium compound can be described as ‘permanent’ and this has profound consequences for its physicochemical behavior. First, the protonation state of a quaternary ammonium nitrogen does not change in response to a change in pH. This means that, unlike amines, quaternary ammonium compounds are not drawn into lysosomes and other acidic compartments. Second, the positive charge needs to be balanced by an anion (in some cases, this may be in the same covalent framework as the quaternary ammonium nitrogen). Despite being positively charged, the quaternary ammonium group is not as polar as you might think because it can’t donate hydrogen bonds to water. However, to get out of water it needs to take its counterion (which is typically polar) with it. I like to think about quaternary ammonium compounds (and other permanent cations) as hydrophobic blobs that are held in solution by the solvation of their counterions. A typical quaternary ammonium compound can also be considered as a detergent in which the polar and non-polar parts are not covalently bonded to each other. 
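The contrast drawn above between a titratable amine and a permanent cation can be put in numbers with the simple pH-partition model of lysosomal trapping, in which only the neutral form of a base crosses the membrane. This is a minimal sketch; the pKa and pH values are illustrative assumptions, not measured values for any particular drug.

```python
import math  # not strictly needed here, but typical for such calculations

def trapping_ratio(pka, ph_cytosol=7.4, ph_lysosome=4.8):
    """Steady-state ratio of total (neutral + protonated) base concentration
    in lysosome vs cytosol, assuming only the neutral form equilibrates
    across the membrane (simple pH-partition model)."""
    return (1 + 10 ** (pka - ph_lysosome)) / (1 + 10 ** (pka - ph_cytosol))

# A monobasic amine with an illustrative pKa of 8.1 accumulates
# hundreds-fold in the acidic lysosomal compartment.
amine_ratio = trapping_ratio(8.1)

# A quaternary ammonium cation has no neutral form to equilibrate,
# so this model predicts no pH-driven accumulation at all.
quat_ratio = 1.0
```

The point of the sketch is that the trapping mechanism invoked for chloroquine simply has no handle on a permanently charged cation.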

My view is that the antiviral ‘activity’ reported for ammonium chloride and chloroquine is a red herring when considering potential antiviral activity of quaternary ammonium compounds because neither has a quaternary ammonium center in its molecular structure. Nevertheless, I consider “raising the endocytic and lysosomal pH” to be an unconvincing ‘explanation’ for the antiviral ‘activity’ of ammonium chloride and chloroquine since one would anticipate analogous effects for any base of comparable pKa. One should also anticipate considerable collateral damage to result from raising the endocytic and lysosomal pH (assuming that the ‘drug’ is able to overwhelm the buffering systems that have evolved to maintain physiological pH in live humans). The pH raising ‘explanation’ for antiviral ‘activity’ reminded me of suggestions that cancer can be cured by drinking aqueous sodium bicarbonate and I’ll direct readers to this relevant post by Derek.

This brings us to cetylpyridinium chloride and miramistin shown below and I’ve included the structure of paraquat in the graphic. While miramistin does indeed have a quaternary ammonium nitrogen in its molecular structure, cetylpyridinium chloride is not a quaternary ammonium compound (the cationic nitrogen is only connected to three atoms) and would be more correctly referred to as an N-alkylpyridinium compound (or salt). Nevertheless, this is a less serious error than describing ammonium chloride as a quaternary ammonium compound because cetylpyridinium is, at least, a permanent cation. Neither cetylpyridinium chloride nor miramistin is quite as clean as the authors might have you believe (take a look at L1991 | L1996 | D2017 | K2020 | P2020). I’d expect an N-alkylpyridinium cation to be more electrophilic than a tetraalkylammonium cation and paraquat, with its two N-alkylpyridinium substructures, is highly toxic. Would Lady Bracknell's toxicity assessment have been that one N-alkylpyridinium may be regarded as a misfortune while two looks like carelessness?
I have no problem with hypothesizing that a chemical agent, such as cetylpyridinium chloride, which destroys SARS-CoV-2 on surfaces could do the same thing safely when sprayed up your nose, into your mouth or down your throat. If tackling the virus in this manner, you do need to be thinking about the effects of the chemical agent on the mucus (which is believed to protect against viral infection). The authors assert that cetylpyridinium chloride “has been used in multiple clinical trials” although they only cite this study in which it was used in conjunction with glycerin and xanthan gum (claimed by the authors of the clinical study to “form a barrier on the host mucosa, thus preventing viral contact and invasion”).

The main challenge to a proposal that cetylpyridinium chloride be repurposed for treatment of COVID-19 is that the compound does not appear to have actually been conventionally approved (i.e. shown to be efficacious and safe) as a drug for dosing as a nasal spray, mouth wash or gargle. Another difficulty is that cetylpyridinium chloride does not appear to have a specific molecular target. Something that should worry readers of the article is that the authors make no reference to literature in which potential toxicity of cetylpyridinium chloride and quaternary ammonium compounds is discussed.

This is a good place to wrap up and, here in Trinidad's Maraval Valley, I'm working on a cure for COVID-19. I anticipate a phone call from Stockholm later in the year.

Sunday, 2 August 2020

Why fragments?

Paramin panorama

Crystallographic fragment screens have been run recently against the main protease (at Diamond) and the Nsp3 macrodomain (at UCSF and Diamond) of SARS-CoV-2 and I thought that it might be of interest to take a closer look at why we screen fragments. Fragment-based lead discovery (FBLD) actually has origins in both crystallography [V1992 | A1996] and computational chemistry [M1991 | B1992 | E1994]. Measurement of affinity is important in fragment-to-lead work because it allows fragment-based structure-activity relationships to be established prior to structural elaboration. Affinity measurement is typically challenging when fragment binding has been detected using crystallography although affinity can be estimated by observation of the response of occupancy to concentration (the ∆G° value of −3.1 kcal/mol reported for binding of pyrazole to protein kinase B was derived in this manner).
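Estimating affinity from the response of occupancy to concentration amounts to inverting a one-site binding isotherm. Here is a minimal sketch; the occupancy and concentration values are invented for illustration, and refinement-derived occupancies carry much larger uncertainties in practice.

```python
import math

RT = 0.593  # kcal/mol at 298 K

def kd_from_occupancy(conc_molar, occupancy):
    """Invert the one-site isotherm theta = [L] / ([L] + Kd)."""
    return conc_molar * (1 - occupancy) / occupancy

def delta_g(kd_molar, c_standard=1.0):
    """Standard free energy of binding for a given Kd."""
    return RT * math.log(kd_molar / c_standard)

# hypothetical crystal soak: 60% occupancy at 10 mM ligand
kd = kd_from_occupancy(0.010, 0.60)   # ~6.7 mM
dg = delta_g(kd)                      # roughly -3 kcal/mol
```

In practice one would fit the isotherm to occupancies observed at several soak concentrations rather than invert a single point, but the arithmetic is the same.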

Although fragment-based approaches to lead discovery are widely used, it is less clear why fragment-based lead discovery works as well as it appears to. While it has been stated that “fragment hits form high-quality interactions with the target”, the concept of interaction quality is not sufficiently well-defined to be useful in design. I ran a poll which asked about the strongest rationale for screening fragments.  The 65 votes were distributed as follows: ‘high ligand efficiency’ (23.1%), ‘enthalpy-driven binding’ (16.9%), ‘low molecular complexity’ (26.2%) and ‘God loves fragments’ (33.8%). I did not vote.

The belief that fragments are especially ligand-efficient has many adherents in the drug discovery field and it has been asserted that “fragment hits typically possess high ‘ligand efficiency’ (binding affinity per heavy atom) and so are highly suitable for optimization into clinical candidates with good drug-like properties”. The fundamental problem with ligand efficiency (LE), as conventionally calculated, is that perception of efficiency varies with the arbitrary concentration unit in which affinity is expressed (have you ever wondered why Kd, Ki or IC50 has to be expressed in mole/litre for calculation of LE?). This would appear to be a rather undesirable characteristic for a design metric and LE evangelists might consider trying to explain why it’s not a problem rather than dismissing it as a “limitation” of the metric or trying to shift the burden of proof onto the skeptics to show that the evangelists’ choice of concentration unit for calculation of LE is not useful.

The problems associated with the arbitrary nature of the concentration unit used to express affinity were first identified in 2009 and further discussed in 2014 and 2019. Specifically, it was noted that LE has a nontrivial dependency on the concentration, C°, used to define the standard state. If you want to do solution thermodynamics with quantities defined in terms of concentration then you do need to specify a standard concentration. However, it is important to remember that the choice of standard concentration is necessarily arbitrary if the thermodynamic analysis is to be valid. If your conclusions change when you use a different definition of the standard state then you’ll no longer be doing thermodynamics and, as Pauli might have observed, you’ll not even be wrong. You probably don't know it, but when you use the LE metric, you’re making the sweeping assumption that all values of Kd, Ki and IC50 tend to a value of 1 M in the limit of zero molecular size. Recalling the conventional criticism of homeopathy, is there really a difference between a solute that is infinitely small and a solute that is infinitely dilute?
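The unit-dependence argument can be made concrete with a toy calculation. Because LE divides RT ln(Kd/C°) by the heavy atom count, changing C° shifts each compound's LE by an amount that depends on its size, so rankings can flip. The Kd values and heavy-atom counts below are invented for illustration.

```python
import math

RT = 0.593  # kcal/mol at 298 K

def ligand_efficiency(kd_molar, n_heavy, c_standard=1.0):
    """LE = -deltaG / N_heavy with deltaG = RT * ln(Kd / C_standard)."""
    return -RT * math.log(kd_molar / c_standard) / n_heavy

# hypothetical fragment: Kd = 1 mM, 10 heavy atoms
# hypothetical lead:     Kd = 10 nM, 30 heavy atoms
# With C° = 1 M the fragment looks more efficient than the lead;
# with C° = 1 mM the ranking reverses.
frag_1M = ligand_efficiency(1e-3, 10)          # ~0.41 kcal/mol/atom
lead_1M = ligand_efficiency(1e-8, 30)          # ~0.36 kcal/mol/atom
frag_mM = ligand_efficiency(1e-3, 10, 1e-3)    # exactly 0
lead_mM = ligand_efficiency(1e-8, 30, 1e-3)    # ~0.23 kcal/mol/atom
```

Nothing physical has changed between the two calculations; only the arbitrary unit in which Kd is expressed.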

I think that’s enough flogging of inanimate equines for one blog post so let’s take a look at enthalpy-driven binding. My view of thermodynamic signature characterization in drug discovery is that it’s, in essence, a solution that’s desperately seeking a problem. In particular, there does not appear to be any physical basis for claims that the thermodynamic signature is a measure of interaction quality. In case you’re thinking that I’m an unrepentant Luddite, I will concede that thermodynamic signatures could prove useful for validating physics-based models of molecular recognition and, in specific cases, they may point to differences in binding mode within congeneric series. I should also stress that the modern isothermal calorimeter is an engineering marvel and I'd always want this option for label-free affinity measurement in any project.

It is common to see statements in the thermodynamic signature literature to the effect that binding is ‘enthalpy-driven’ or ‘entropy-driven’ although it was noted in 2009 (coincidentally, in the same article that highlighted the nontrivial dependence of LE on C°) that these terms are not particularly meaningful. The problems start when you make comparisons between the numerical values of ∆H (which is independent of C°) and T∆S° (which depends on C°). If I’d presented such a comparison in physics class at high school (I was taught by the Holy Ghost Fathers in Port of Spain), I would have been caned with a ferocity reserved for those who’d dozed off in catechism class.  I’ll point you toward an article which asserts that, “when compared with many traditional druglike compounds, fragments bind more enthalpically to their protein targets”. I have a number of issues with this article although this is not the place for a comprehensive review (although I’ll probably pick it up in ‘The Nature of Lipophilic Efficiency’ when that gets written).
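The mismatch can be shown with a single hypothetical measurement: the calorimetric ∆H is unchanged by the choice of C°, but T∆S° = ∆H − ∆G° inherits the RT ln C° shift, so the ‘driven’ label can flip with the unit choice. The Kd and ∆H values below are invented.

```python
import math

RT = 0.593  # kcal/mol at 298 K

def signature(kd_molar, dH, c_standard=1.0):
    """Return (dG_standard, T_dS_standard); dH is independent of the
    standard state but dG and TdS both depend on C_standard."""
    dG = RT * math.log(kd_molar / c_standard)
    return dG, dH - dG

# hypothetical ITC result: Kd = 10 nM, dH = -5 kcal/mol
dG_1M, TdS_1M = signature(1e-8, -5.0)        # TdS ~ +5.9: 'entropy-driven'?
dG_mM, TdS_mM = signature(1e-8, -5.0, 1e-3)  # TdS ~ +1.8: 'enthalpy-driven'?
```

A label that changes when you swap molar for millimolar units is telling you about your bookkeeping, not about the molecular recognition.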

While I don’t believe that the authors have actually demonstrated that fragments bind more enthalpically than ligands of greater molecular size, I wouldn’t be surprised to discover that gains in affinity over the course of a fragment-to-lead (F2L) campaign had come more from entropy than enthalpy. First, the lost translational entropy (the component of ∆S° that endows it with its dependence on C°) is shared over a greater number of intermolecular contacts for structurally-elaborated compounds and this article is relevant to the discussion. Second, I’d expect the entropy of any water molecule to increase when it is moved to bulk solvent from contact with molecular surface of ligand or target (regardless of polarity of the molecular surface at the point of contact). Nevertheless, this is something that you can test easily by examining the response of (∆H + T∆S°) to ∆G° (best not to aggregate data for different targets and/or temperatures when analyzing isothermal titration calorimetry data in this manner). But even if F2L affinity gains were shown generally to come more from entropy than enthalpy, would that be a strong rationale for screening fragments?
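The suggested test can be sketched as a regression of (∆H + T∆S°) on ∆G° for a series measured against one target at one temperature. Since ∆G° = ∆H − T∆S°, a slope near +1 indicates that affinity gains are mostly enthalpic and a slope near −1 that they are mostly entropic. The data below are synthetic, constructed so that gains come entirely from entropy.

```python
def slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# synthetic congeneric series (kcal/mol): dH is constant, so every
# gain in affinity is entropic and TdS = dH - dG tracks -dG exactly
dG  = [-5.0, -6.0, -7.0, -8.0]
dH  = [-3.0, -3.0, -3.0, -3.0]
TdS = [h - g for h, g in zip(dH, dG)]

# slope of (dH + TdS) vs dG is -1 for a purely entropy-driven series
s = slope(dG, [h + t for h, t in zip(dH, TdS)])
```

With real ITC data the slope will land somewhere between the two extremes, and the caveat in the text applies: mixing targets or temperatures in one regression muddles the C°-dependent term.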

This gets us onto molecular complexity and this article by Mike Hann and GSK colleagues should be considered essential reading for anybody thinking about selection of compounds for screening. The Hann model is a conceptual framework for molecular complexity but it doesn’t provide much practical guidance as to how to measure complexity (this is not a criticism since the thought process should be more about frameworks and less about metrics). I don’t believe that it will prove possible to quantify molecular complexity in an objective manner that is useful for designing compound libraries (I will be delighted to be proven wrong on this point). The approach to handling molecular complexity that I’ve used in screening library design is to restrict extent of substitution (and other substructural features that can be considered to be associated with molecular complexity) and this is closer to ‘needle screening’ as described by Roche scientists in 2000 than to the Hann model.

Had I voted in the poll, ‘low molecular complexity’ would have got my vote.  Here’s what I said in NoLE (it’s got an entire section on fragment-based design and a practical suggestion for redefining ligand efficiency so that perception does not change with C°):

"I would argue that the rationale for screening fragments against targets of interest is actually based on two conjectures. First, chemical space can be covered most effectively by fragments because compounds of low molecular complexity [18, 21, 22] allow TIP [target interaction potential] to be explored [70,71,72,73,74] more efficiently and accurately. Second, a fragment that has been observed to bind to a target may be a better starting point for design than a higher affinity ligand whose greater molecular complexity prevents it from presenting molecular recognition elements to the target in an optimal manner."

To be fair, those who advocate the use of LE and thermodynamic signatures in fragment-based design do not deny the importance of molecular complexity. Let’s assume for the sake of argument that interaction quality can actually be defined and is quantified by the LE value and/or the thermodynamic signature for binding of compound to target. While these are massive assumptions, LE values and thermodynamic signatures are still effects rather than causes.

The last option in the poll was ‘God loves fragments’ and more respondents (33.8%) voted for this than for any of the first three options. I would interpret a vote for ‘God loves fragments’ in one of three ways. First, the respondent doesn’t consider any one of the first three options to be a stronger rationale for screening fragments than the other two. Second, the respondent doesn’t consider any of the first three options to be a valid rationale for screening fragments. Third, the respondent considers fragment-based approaches to have been over-sold.

This is a good place to wrap up. While I remain an enthusiast for fragment-based approaches to lead discovery, I do also believe that they have been somewhat oversold. The sensitivity of LE evangelists to criticism of their metric may stem from the use of LE to sell fragment-based methods to venture capitalists and, internally, to skeptical management. A shared (and serious) deficiency in the conventional ways in which LE and thermodynamic signature are quantified is that perception changes when the arbitrary concentration,  C°, that defines the standard state is changed. While there are ways in which this deficiency can be addressed for analysis, it is important that the deficiency be acknowledged if we are to move forward. Drug design is difficult and if we, as drug designers, embrace shaky science and flawed data analysis then those who fund our activities may conclude that the difficulties that we face are of our own making.