Friday, 1 April 2016

LELP metric validated

So I was wrong all along about LELP.  

I now have to concede that the LELP metric actually represents a seminal contribution to the science of drug design. Readers of this blog will recall our uncouth criticism of LELP which, to my eternal shame, I must now admit is actually the elusive, universal metric that achieves simultaneous normalization (and renormalization) of generalized interaction potential with respect to the size-scaled octanol/water partition function.

What changed my view so radically? Previously we observed that LELP treats the ADME risk associated with logP of 1 and 75 heavy atoms as equivalent to that associated with logP of 3 and 25 heavy atoms. Well it turns out that I was using the rest mass of the hydrogen atom to make this comparison which unequivocally invalidates the criticism of what turns out to be the most fundamental (and beautiful) of all the ligand efficiency metrics.

It is not intuitively obvious why relativistic correction is necessary for the correct normalization of affinity with respect to both molecular size and lipophilicity. However, I was fortunate to receive a rare copy of the seminal article in the Carpathian Journal of Thermodynamics by T. T. Macoute, O. B. Ya and B. Samedi. The math is quite formidable and is based on convergence characteristics of the non-linear response to salt concentration of the Soucouyant Tensor, following application of the Douen p-Transform. B. Samedi is actually better known for his even more seminal study (with A. Bouchard) of the implications for cognitive function of the slow off-rate of tetrodotoxin in its dissociation from Duvalier's Tyrosine Kinase (DTK).

So there you have it. Ignore all false metrics and use LELP with confidence in your Sacred Quest for the Grail.  

Thursday, 10 March 2016

Ligand efficiency beyond the rule of 5


One recurring theme in this blog is that the link between physicochemical properties and undesirable behavior of compounds in vivo may not be as strong as property-based design 'experts' would have us believe. To be credible, guidelines for drug discovery need to reflect trends observed in relevant, measured data, and the strengths of these trends tell you how much weight you should give to the guidelines. Drug discovery guidelines are often specified in terms of metrics, such as Ligand Efficiency (LE) or Property Forecast Index (PFI), and it is important to be aware that every metric encodes assumptions (although these are rarely articulated).

The most famous set of guidelines for drug discovery is known as the rule of 5 (Ro5), which is essentially a statement of physicochemical property distributions for compounds that had progressed at least as far as Phase II at some point before the Ro5 article was published in 1997. It is important to remember (some 'experts' have short memories) that Ro5 was originally presented as a set of guidelines for oral absorption. Personally, I have never regarded Ro5 as particularly helpful in practical lead optimization since it provides no guidance as to how suboptimal ADMET characteristics of compliant compounds can be improved. Furthermore, Ro5 is not particularly enlightening with respect to the consequences of straying out of the allowed region and into 'die Verbotenezone'.

Nobody reading this blog needs to be reminded that drug discovery is an activity that has been under the cosh for some time and a number of publications ( 1 | 2 | 3 | 4 ) examine potential opportunities outside the chemical space 'enclosed' by Ro5. Given that drug-likeness is not the secure concept that those who claim to be leading our thoughts would have us believe, I do think that we really need to be a bit more open minded in our views as to the regions of chemical space in which we are prepared to work. That said, you cannot afford to perform shaky analysis when proposing that people might consider doing things differently because that will only hand a heavy cudgel to the roundheads for them to beat you with.

The article that I'll be discussing has already been Pipelined and this post has a much narrower focus than Derek's post. The featured study defines three regions of chemical space: Ro5 (rule of 5), eRo5 (extended rule of 5) and bRo5 (beyond rule of 5). The authors note that "eRo5 space may be thought of as a buffer zone between Ro5 and bRo5 space". I would challenge this point because there is a region (MW less than 500 Da and ClogP between 5 and 7.5) between Ro5 and bRo5 spaces that is not covered by the eRo5 specifications. As such, it is not meaningful to compare properties of eRo5 compounds with properties of Ro5 or bRo5 compounds. The authors of the featured article really do need to fix this problem if they're planning to carve out a niche in this area of study because failing to do so will make it easier for conservative drug-likeness 'experts' to challenge their findings. Problems like this are particularly insidious because the activation barriers for fixing them just keep getting higher the longer you ignore them.

But enough of Bermuda Triangles in the space between Ro5 and bRo5 because the focus of this post is ligand efficiency and specifically its relevance (or otherwise) to bRo5 compounds. I'll write a formula for generalized LE in a way that makes it clear that ΔG° is a function of temperature, pressure and the standard concentration:


LEgen = −ΔG°(T, p, C°)/HA

When LE is calculated it is usually assumed that C° is 1 M although there is nothing in the original definition of LE that says this has to be so and few, if any, users of the metric are even aware that they are making the assumption. When analyzing data it is important to be aware of all assumptions that you're making and the effects that making these assumptions may have on the inferences drawn from the analysis.
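To make that buried assumption concrete, here is a minimal Python sketch (my illustration, not anything taken from the featured article) of the generalized LE defined above; the function name and the example values are hypothetical.

```python
from math import log

R_KCAL = 0.001987  # gas constant in kcal/(mol*K)

def ligand_efficiency(kd_molar, heavy_atoms, temperature=298.0, c_standard=1.0):
    """Generalized LE: -deltaG(T, p, C)/HA with deltaG = RT*ln(Kd/C)."""
    delta_g = R_KCAL * temperature * log(kd_molar / c_standard)  # kcal/mol
    return -delta_g / heavy_atoms

# The same compound (Kd = 1 uM, 25 heavy atoms) scored against two standard concentrations
print(ligand_efficiency(1e-6, 25))                   # C = 1 M (the usual, unstated assumption): ~0.33
print(ligand_efficiency(1e-6, 25, c_standard=1e-3))  # C = 1 mM: ~0.16
```

Nothing about the compound changes between the two calls; only the bookkeeping does.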

Sometimes LE is used to specify design guidelines. For example, we might assert that acceptable fragment hits must have LE above a particular cutoff. It's important to remember that setting a cutoff for LE is equivalent to imposing an affinity cutoff that depends on molecular size. I don't see any problem with allowing the affinity cutoff to increase with molecular size (or indeed lipophilicity) although the response of the cutoff to molecular size should reflect analysis of measured data (rather than noisy sermons of self-appointed thought-leaders). When you set a cutoff for LE, you're assuming (whether or not you are aware of it) that the affinity cutoff is a line that intersects the affinity axis at a point corresponding to a KD of 1 M. Before heading back to bRo5, I'd like you to consider a question. If you're not comfortable setting an affinity cutoff as a function of molecular size, would you be comfortable setting a cutoff for LE?

So let's take a look at what the featured article has to say about affinity: 


"Affinity data were consistent with those previously reported [44] for a large dataset of drugs and drugs in Ro5, eRo5 and bRo5 space had similar means and distributions of affinities (Figure 6a)"

So the article is saying that, on average, bRo5 compounds don't need to be of higher affinity than Ro5 compounds and that's actually useful information. One might hypothesize that unbound concentrations of bRo5 compounds tend to be lower than for Ro5 compounds because the former are less drug-like and precisely the abominations that MAMO (Mothers Against Molecular Obesity) have been trying to warn honest, god-fearing folk about for years. If you look at Figure 6a in the featured article, you'll see that the mean affinity does not differ significantly between the three categories of compound. Regular readers of this blog will be well aware that categorizing continuous data in this manner tends to exaggerate trends in data. Given that the authors are saying that there isn't a trend, correlation inflation is not an issue here.

Now look at Figure 6b. The authors note:


"As the drugs in eRo5 and bRo5 space are significantly bigger than Ro5 drugs, i.e., they have higher molecular weights and more heavy atoms, their LE is significantly lower"

If you're thinking about using these results in your own work, you really need to be asking whether or not the results provide any real insight (i.e. something beyond the trivial result that 1/HA gets smaller when HA gets larger). This would also be a good time to think very carefully about all the assumptions you're going to make in your analysis. The featured article states:

"Ligand efficiency metrics have found widespread use;[45however, they also have some limitations associated with their application, particularly outside traditional Ro5 drug space. [46We nonetheless believe it is useful to characterize the ligand efficiency (LE) and lipophilic ligand efficiency (LLE) distributions observed in eRo5 and bRo5 space to provide guides for those who wish to use them in drug development"

Given that I have asserted that LE is not even wrong and have equated it with homeopathy, I'm not sure that I agree with sweeping LE's problems under the carpet by making a vague reference to "some limitations". Let's not worry too much about trivial details because declaring a ligand efficiency metric to be useful is a recognized validation tool (even for LELP, which appears to have jumped straight from the pages of a Mary Shelley novel). There is a rough analogy with New Math where "the important thing is to understand what you're doing rather than to get the right answer" although that analogy shouldn't be taken too far because it's far from clear whether or not LE advocates actually understand what they are doing. As an aside, New Math is what inspired "the rule of 3 is just like the rule of 5 if you're missing two fingers" that I have occasionally used when delivering harangues on fragment screening library design.

So let's see what happens when one tries to set an LE threshold for bRo5 compounds. The featured article states:

"Instead, the size and flexibility of the ligand and the shape of the target binding site should be taken into account, allowing progression of compounds that may give candidate drugs with ligand efficiencies of ≥0.12 kcal/(mol·HAC), a guideline that captures 90% of current oral drugs and clinical candidates in bRo5 space"

So let's see how this recommended LE threshold of 0.12 kcal/(mol·HA) translates to affinity thresholds for compounds with molecular weights of 700 Da and 3000 Da. I'll assume a temperature of 298 K and a C° of 1 M when calculating ΔG° and will use 14 Da/HA to convert molecular weight to heavy atoms (the arithmetic is sketched in the code after the two questions). I'll conclude the post by asking you to consider the following two questions:


  • The recommended LE threshold transforms to a pKD threshold of 4.4 at 700 Da. When considering progression of compounds that may give candidate drugs, would you consider a recommendation that KD should be less than 40 μM to be useful?



  • The recommended LE threshold transforms to a pKD threshold of 19 at 3000 Da. How easy do you think it would be to measure a pKD value of 19? When considering progression of compounds that may give candidate drugs, would you consider a recommendation that  pKD be greater than 19 to be useful?
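For anyone who wants to check the arithmetic behind those two numbers, here is a minimal sketch under the stated assumptions (298 K, C° of 1 M, 14 Da per heavy atom); the function name is mine.

```python
from math import log

RT_LN10 = 0.001987 * 298.0 * log(10.0)  # ~1.36 kcal/mol per log unit at 298 K

def pkd_threshold(le_threshold, mol_weight, da_per_heavy_atom=14.0):
    """pKD implied by an LE cutoff in kcal/(mol*HA) at a given molecular weight in Da."""
    heavy_atoms = mol_weight / da_per_heavy_atom
    return le_threshold * heavy_atoms / RT_LN10

print(round(pkd_threshold(0.12, 700.0), 1))   # ~4.4, i.e. KD of roughly 40 uM
print(round(pkd_threshold(0.12, 3000.0), 0))  # ~19
```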

      

Monday, 7 March 2016

On Sci-Hub

Many readers will have heard of Sci-Hub which makes almost 50 million copyrighted journal articles freely available. Derek has blogged about Sci-Hub and has also suggested that it might not matter as much as some think that it does. Readers might also want to take a look at some other posts ( 1 | 2 | 3 ) on the topic. I'll focus more on some of the fallout that might result from Sci-Hub's activities but won't be expressing an opinion as to who is right and who is wrong. Briefly, one side says that knowledge should be free, the other side says that laws have been broken. I'll leave it to readers to decide for themselves which side they wish to take because nothing I write is likely to change people's views on this subject. 

Sci-Hub and its creatrix are based in Russia and, given the current frosty relations between Russia and the countries which host the aggrieved journal publishers, it is safe to assume that Sci-Hub will be able to thumb its nose at those publishers for the foreseeable future. Sci-Hub relies on helpers to provide it with access to the copyrighted material and these helpers presumably do this by making their institutional subscription credentials available to Sci-Hub. It's worth noting that one usually accesses copyrighted material through a connection that is recognized by the publisher and only a very small number of people at an institution actually know the access keys/passwords. One important question is whether or not publishers can trace the PDFs supplied by Sci-Hub. I certainly recall seeing PDFs from certain sources being marked with the name of the institution and date of download so I don't think that one can safely assume that no PDF is traceable. If a publisher can link a PDF supplied by Sci-Hub to a particular institution then presumably the publisher could sue the institution because providing third parties with access is specifically verboten by most (all?) subscription contracts. An institution facing a legal challenge from a publisher would be under some pressure to identify the leaks and publishers would be keen for some scalps pour encourager les autres.

While it would be an understatement to say that the publishers are pissed off that Sci-Hub has managed to 'liberate' almost 50 million copyrighted journal articles, it is not clear how much lasting damage has been done. The fee for downloading an article to which one does not have subscription access is typically in the range $20 to $50 but my guess is that only a tiny proportion of publishers' revenues comes from these downloads. I actually think the publishers set the download fees to provide institutions with the incentive to purchase subscriptions rather than to generate revenue from pay-per-view. If this is the case, Sci-Hub will only do real damage to the publishers if, by continuing to operate, it causes institutions to stop subscribing or helps them to negotiate cheaper subscriptions.

There is not a lot that the publishers can do about the material that Sci-Hub already has in its possession but there are a number of tactics that they might employ in order to prevent further 'liberation' of copyrighted material. I don't know if it is possible to engineer a finite lifetime into PDF files but they can be protected with passwords and publishers may try to allow only a small number of individuals at each institution direct access to the copyrighted material as PDF files. Alternatively the publishers might require that individual users create accounts and change passwords regularly in order to make it more difficult (and dangerous) for Sci-Hub's helpers to share their access. Countermeasures put in place by publishers to protect content are likely to add complexity to the process of accessing that content. This in turn would make it more difficult to mine content and the existence (and scale) of Sci-Hub could even be invoked as a counter to arguments that the right to read is the right to mine.

Given that almost 50 million articles are freely available on Sci-Hub, one might consider potential implications for Open Access (OA). There is a lot of heated debate about OA although the issues are perhaps not as clear cut as OA advocates would have you believe and this theme was explored in a post from this blog last year.  Although there is currently a lot of pressure to reduce the costs of subscriptions, it is difficult to predict how far Sci-Hub will push subscription-based journal publishers towards a purely OA business model. For example, we may see scientific publication moving towards a 'third way' in the form of pre-publication servers with post-publication peer review. I wouldn't be surprised to learn that 'direct to internet' has usurped both subscription-based and OA scholarly publishing models twenty years from now. That, however, is going off on a tangent and, to get things back on track, I'd like you to think of Sci-Hub from the perspective of an author who has paid a subscription-based journal $2000 to make an article OA. Would it be reasonable for this author to ask for a refund?       

   

Monday, 29 February 2016

The boys who cried wolf

So it's back to blogging and it's taken a bit longer to get into it this year since I had to finish a few things before leaving Brazil. This is a long post so make sure to have some strong coffee to hand.

This post features an article, 'Molecular Property Design: Does Everyone Get It?' by two unwitting 'collaborators' in our correlation inflation Perspective. There are, however, a number of things that the authors of this piece just don't 'get' which makes their choice of title particularly unfortunate. The first thing that they don't 'get' is that doing questionable data analysis in the past means that people in the present are less likely to heed your warnings about the decline in quality of compounds in today's pipelines. As has been pointed out more than once by this blog, rules/guidelines in drug discovery are typically based on trends observed in measured data and the strength of the trend tells you how rigidly you should adhere to the rule/guideline. Correlation inflation (see also voodoo correlations) is a serious problem in drug discovery because it causes drug discovery scientists to give more weight to rules/guidelines (and 'expert' opinion) than is justified by the data. In drug discovery, we need to make a distinction between what we believe and what we know. If we can't (or won't) make this distinction then those who fund our activities may conclude that the difficulties that we face are actually of our own making and that's something else that the authors of the featured article just don't seem to 'get'. "Views obtained from senior medicinal chemistry leaders..." does come across as arm-waving and I'm surprised that the editor and reviewers (if there were any) let them get away with it.

If you're familiar with the correlation inflation problem, you'll know that one of the authors of the featured article did some averaging of groups of data points prior to analysis which was presented in support of an assertion that, "Lipophilicity plays a dominant role in promoting binding to unwanted drug targets". This may indeed be the case but it is not correct to suggest that the analysis supports this opinion because the reported correlations are between promiscuity and median lipophilicity rather than lipophilicity itself. The author concedes that the analysis has been criticized but does not make any attempt to rebut the criticism. Readers can draw their own conclusions from the lack of rebuttal.

The other author of the featured article also 'contributed' to our correlation inflation study although it would be stretching it to describe that contribution as 'data analysis'. The approach used there was to first bin the data and then to plot bar charts which were compared visually. You might wonder how a bar chart of binned data can be used to quantify the strength of a trend and, if attempting to do this, keep your arms loose because you'll be waving them a lot. Here are a couple of examples of how the approach is applied:

The clearer stepped differentiation within the bands is apparent when log DpH7.4 rather than log P is used, which reflects the considerable contribution of ionization to solubility. 

This graded bar graph (Figure 9) can be compared with that shown in Figure 6b to show an increase in resolution when considering binned SFI versus binned c log DpH7.4 alone. 

This second approach to data 'analysis' is actually more relevant than the first to this blog post because it is used as 'support' ('a crutch' might be a more appropriate term) for SFI (Solubility Forecast Index), which is the old name for PFI (Property Forecast Index) which the featured article touts as a metric.  If you're thinking that it's rather strange to 'convert' one form of continuous data (e.g. measured logD) into another form of continuous data (values of metrics) by first making it categorical and turning it into pictures, you might not be alone. What 'senior medicinal chemistry leaders' would make of such data 'analysis' is open to speculation.  

But enough of voodoo correlations and 'pictorial' data analysis because I should make some general comments on property-based design. Here's a figure that provides an admittedly abstract view of property-based design.
 
One challenge for drug-likeness advocates analyzing large, structurally heterogeneous data sets is to make the results of analysis relevant to the medicinal chemists working on one or two series in a specific lead optimization project. Affinity (for association with both therapeutic target and antitargets) and free concentration at site of action are the key determinants of drug action. In general, the response of activity to lipophilicity depends on chemotype and, in the case of affinity, also on the relevant protein target (or antitarget). If you're going to tell medicinal chemists how to do their jobs then you can't really afford to have any data-analytic skeletons rattling around in the closet and that's something else that the authors of the featured article just don't 'get'.

The featured article asserts:

The principle of minimal hydrophobicity, proposed by Hansch and colleagues in 1987 states that “without convincing evidence to the contrary, drugs should be made as hydrophilic as possible without loss of efficacy.” This hypothesis is surviving the test of time and has been quantified as lipophilic ligand efficiency (LLE or LipE).

A couple of points need to be made here. Firstly, when Hansch et al refer to 'hydrophobicity', they mean octanol/water logP (as opposed to logD). Secondly, the observation that excessive lipophilicity is a bad thing doesn't actually justify using LLE/LipE in lead optimization. The principle proposed by Hansch et al suggests that a metric of the following functional form may be useful for normalization of activity with respect to lipophilicity:


pIC50 − (λ × logP)

However, the principle does not tell us what value of λ is most appropriate (or indeed whether a single value of λ is appropriate for all situations). The 'sound and fury' article reviewed in an earlier post makes a similar error with ligand efficiency.
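To see why this matters, here is a minimal sketch (mine, not Hansch's and not from the 'sound and fury' article) of the general functional form; LLE/LipE is simply the λ = 1 special case, and the hypothetical example shows that the ranking of two compounds can flip with the choice of λ.

```python
def normalized_activity(pic50, logp, lam=1.0):
    """pIC50 - lambda*logP; LLE/LipE corresponds to lam = 1.0."""
    return pic50 - lam * logp

# Two hypothetical compounds: which one looks 'better' depends on the value chosen for lambda
a = dict(pic50=8.0, logp=4.0)
b = dict(pic50=6.0, logp=1.0)
for lam in (0.5, 1.0):
    print(lam, normalized_activity(**a, lam=lam), normalized_activity(**b, lam=lam))
# lam = 0.5: a (6.0) beats b (5.5); lam = 1.0: b (5.0) beats a (4.0)
```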

So it's now time to take a look at PFI and the featured article asserts:


The likelihood of meeting multiple criteria, a typical requirement for a candidate drug, increases substantially with  ‘low fat, low flat’ molecules where PFI is <7, versus >7. In considering a portfolio of drug candidates, the probabilistic argument hypothesizes that successful outcomes will increase as the portfolio’s balance of biological and physicochemical properties becomes more similar to that of marketed drugs.

The first thing that a potential user of PFI should be asking him/herself is where this magic value of 7 comes from since the featured article does imply that the likelihood of good things will increase substantially when PFI is reduced from 7.1 to 6.9. Potential users also need to ask whether this step jump in likelihood is backed by statistical analysis of experimental data or by 'clearer stepped variation' in pictures created using an arbitrary binning scheme. It's also worth remembering that thresholds used to apply guidelines often reflect the binning schemes used to convert continuous data to categorical data and the correlation inflation Perspective discusses the 4/400 rule in this context. Something that molecular property design 'experts' really do need to 'get' is that simple yes/no guidelines are of limited use in practical lead optimization even when these are backed by competent analysis of relevant experimental data. Molecular property 'experts' also need to 'get' that measured lipophilicity is not actually a molecular property. 

PFI is defined as the sum of chromatographic logD (at pH 7.4) and the number of aromatic rings: 


PFI = Chrom logD(pH 7.4) + #Ar rings
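In code the definition is almost embarrassingly simple (a sketch, with the chromatographic logD and the aromatic ring count supplied as inputs rather than calculated):

```python
def pfi(chrom_logd_ph74, n_aromatic_rings):
    """Property Forecast Index: chromatographic logD at pH 7.4 plus the aromatic ring count."""
    return chrom_logd_ph74 + n_aromatic_rings

print(pfi(3.2, 3))  # 6.2, just inside the 'low fat, low flat' PFI < 7 zone
```

Writing it out like this makes the implicit exchange rate explicit: one aromatic ring is treated as exactly equivalent to one unit of chromatographic logD, an assumption we'll come back to.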

Now suppose that you're a medicinal chemist in a department where the head of medicinal chemistry has decreed that 80% of compounds synthesized by departmental personnel must have PFI less than 7. When senior medicinal chemistry leaders set targets like these, the primary objective (i.e. the topic of your annual review) is to meet them. Delivering clinical candidates is a secondary objective since these will surely materialize in the pipeline as if by magic provided that the compound quality targets are met.

There is a difference between logD (whether measured  by shake-flask or chromatographically) and logP and one which it is important for compound quality advocates to 'get'. When we measure lipophilicity, we determine logD rather than logP and so it is not generally valid to invoke Hansch's principle of minimal hydrophobicity (which is based on logP) when using logD. If the compound in question is not significantly ionized under experimental conditions (pH) then logP and logD will be identical. However, this is not the case when ionization is significant as is usually the case for amines and carboxylic acids at a physiological pH like 7.4. If ionization is significant then logD will typically be lower than logP and we sometimes assume that only the neutral form of the compound partitions into the organic phase for the purposes of prediction or interpretation of log D values. If this is indeed the case we can write logD as a function of logP and the fraction of compound existing in neutral form(s):

  logD(pH) = logP + log Fneut(pH)
Ionized forms can sometimes partition into the organic phase although measuring the extent to which this happens is not easy and the effective partition coefficient for a charged entity depends on whatever counter ion is present (and its concentration). 
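Here is a minimal sketch of that relationship for a monoprotic acid or base, under the stated assumption that only the neutral form partitions; the pKa and logP values in the example are hypothetical.

```python
from math import log10

def log_fraction_neutral(pka, ph, is_base=True):
    """log10 of the fraction of a monoprotic compound present in its neutral form."""
    delta = (pka - ph) if is_base else (ph - pka)
    return -log10(1.0 + 10.0 ** delta)

def logd(logp, pka, ph=7.4, is_base=True):
    """logD(pH) = logP + log Fneut(pH), assuming only the neutral form partitions."""
    return logp + log_fraction_neutral(pka, ph, is_base)

# An amine with logP 3.0 and pKa 9.4 is ~99% ionized at pH 7.4, so logD sits ~2 units below logP
print(round(logd(3.0, 9.4), 2))  # ~1.0
```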

So let's get back to the problem of reducing logD so our medicinal chemist can achieve those targets and get an A+ rating in the annual review. Two easy ways to lower logD are to add ionizable groups (if the compound is neutral) and to increase the extent of ionization (if the compound already has ionizable groups). Increasing the extent of ionization will generally be expected to increase aqueous solubility but I hope readers can see why we wouldn't expect this to help when a compound binds in an ionized form to an antitarget such as hERG (see here for a more complete discussion of this point). Now I'd like you to take a close look at Figure 2(a) in the featured article. You'll notice that the profiles for the last two entries (hERG and promiscuity) have actually been generated using intrinsic PFI (iPFI) rather than PFI itself and you may be wondering what iPFI is and why it was used instead of PFI. In answer to the first question, iPFI is calculated using logP rather than logD:


iPFI = logP + #Ar rings

This definition of iPFI is not quite complete because the authors of the featured article don't actually say what they mean by logP. Is it actually obtained directly from experimental measurements (e.g. a logD/pH profile) or is it calculated (in which case the method used for the calculation should be stated)?

Some medicinal chemists reading this will be asking what iPFI was even doing in the article in the first place and my response would be, as I say frequently in Brazil, 'boa pergunta' (good question). My guess is that using PFI rather than iPFI for the hERG row of Figure 2(a) would have the effect of shifting the cells in this row one or two cells to the left (based on the assumption that logP will be 1 to 2 units greater than logD at pH 7.4). Such a shift would make compounds with PFI less than 7 look 'dirtier' than the PFI advocates would like you to think.

There is another term in PFI and that's the number of aromatic rings (# Ar rings) which is meant to measure how 'flat' a molecular structure is.  That it might do but then again it might not because two 'flat' aromatic rings will look a lot less flat when linked by a sulfonyl group and their rigidity could prove to be a liability when trying to pack them into a crystal lattice. However, number of aromatic rings will also quantify molecular size (especially in typical Pharma compound collections) and this is something my friends at Practical Fragments have also noted. Molecular size had been recognized as a pharmaceutical risk factor for at least a decade before people started to tout PFI (or SFI) as a compound quality metric and we can legitimately ask whether or not using a more conventional measure of molecular size (e.g. molecular weight, number of non-hydrogen atoms or molecular volume) would have resulted in a more predictive (or useful) metric.

So let's assume for a moment that you're a medicinal chemist in a place where the 'senior medicinal chemistry leaders' actually believe that optimizing PFI is useful. In case you don't know, jobs for medicinal chemists don't exactly grow on trees these days and so it makes a lot of sense to adopt an appropriately genuflectory attitude to the prevailing 'wisdom' of your 'leaders'. The problem is that your 1 nM enzyme inhibitor with the encouraging pharmacokinetic profile has a PFI of 8 and your lily-livered manager is taking some flak from the Compound Repository Advisory Panel for having permitted you to make it in the first place. Fear not because you have two benzene rings at the periphery of the molecular structure which will make the synthesis relatively easy. Basically you need to think of a metric like PFI as a Gordian knot that needs to be cut efficiently and you can do this either by eliminating rings or by eliminating aromaticity. Substitution of benzoquinone (either isomer) or cyclopentadiene for the offending benzene rings will have the desired effect.

It's been a long post and I really do need to start wrapping things up. One common reaction when you criticize drug discovery metrics is the straw man defense, in which your criticism is interpreted as an assertion that one doesn't need to worry about physicochemical properties. In other words, this is precisely the sort of deviant behavior that MAMO (Mothers Against Molecular Obesity) have been trying to warn about. To the straw men, I will say that we described lipophilicity and molecular size as pharmaceutical risk factors in our critique of ligand efficiency metrics. In that critique, we also explain what it means to normalize activity with respect to a risk factor and that's something that not even the NRDD ligand efficiency metric review does. There's a bit more to defining a compound quality metric than dreaming up arbitrary functions of molecular size and lipophilicity and that's something else that the authors of the featured article just don't seem to 'get'. When you use PFI you're assuming that a one unit decrease in chromatographic logD is equivalent to eliminating an aromatic ring (or the aromaticity of a ring) from the molecular structure.

The essence of my criticism of metrics is that the assumptions encoded by the metrics are rarely (if ever) justified by analysis of relevant measured data. A plot of pIC50 against the relevant property for your project compounds is a good starting point for property-based design and it allows you to use the actual trend observed in your data for normalization of activity values (see the conclusion to our ligand efficiency metric critique for a more detailed discussion of this). If you want to base your decisions on 'clearer stepped differentiation' in pictures or on the blessing of 'senior medicinal chemistry leaders', as a consenting adult, you are free to do so.
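As an illustration of that starting point (my sketch, not a recipe from the critique), fitting pIC50 against the property of interest for your own compounds and working with the residuals lets the data, rather than an assumed slope or intercept, do the normalization.

```python
import numpy as np

def residual_activity(pic50, prop):
    """Least-squares fit of pIC50 against a property; the residuals are the normalized activities."""
    pic50 = np.asarray(pic50, dtype=float)
    prop = np.asarray(prop, dtype=float)
    slope, intercept = np.polyfit(prop, pic50, 1)
    return pic50 - (slope * prop + intercept)

# Hypothetical project compounds: pIC50 against logP
print(residual_activity([5.0, 6.1, 6.9, 8.2], [1.0, 2.0, 3.0, 4.0]))
```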

Wednesday, 6 January 2016

Looking back at 2015


I'll start the year by taking a look back at some of the 2015 blog posts. The dynamic range of the bullshitometer was severely tested last year and there was an element of 'pour encourager les autres' to more than one of the posts. I thought that it'd be fun to share some travel pics and the first is of the Danube in Belgrade (I'd dropped by to catch up with friends and deliver a harangue at the university). The early evening light was quite perfect although I hope that I won't spoil your experience of the photo by telling you that there was a pig carcass floating about 100 m from where it was taken.

 

I changed the title of the blog this year. I've not been involved with FBDD for some years now and molecular design was always my main interest. One of the ideas that I try to communicate is that there's more to design than just making predictions. After Belgrade, I dropped in at Fidelta in Zagreb where I delivered another harangue before heading south to Sarajevo.  I'm a keen student of history so it was inevitable that this would be the first photo I'd take in Sarajevo.


It seems so bizarre today. There had already been one assassination attempt for the day when the driver of the car took the fateful wrong turn that gave Gavrilo Princip the opportunity to fire two shots at the royal couple. Back in Vienna, Sophie was not always allowed out in public with Franz Ferdinand so the trip to Sarajevo may have been a special treat for her. What if SatNav had already been invented but, then again, what if Queen Victoria's eldest child had succeeded her to the throne?

Part of the problem was that, as a lowly Czech countess, Sophie was not considered an appropriate match for the Habsburg heir by Franz Josef (the reigning emperor and a puritanical old killjoy) and there were rules (although metrics and Lean Six Sigma 'belts' had, thankfully, not yet been invented). One of the rules was that the children of Sophie and Franz Ferdinand were barred from succession. It is somewhat ironic that poor Franz Ferdinand was never even supposed to be crown prince in the first place and only got the job because his cousin Rudolf had abruptly removed himself from the Habsburg line of succession a quarter of a century previously. 

All this talk of puritanical rules serves as a reminder that, before moving on, I need to point you towards a friend's blog post on roundheads (who were bigger killjoys than Franz Josef or even Lean Six Sigma 'belts') and cavaliers in drug discovery.  I really like the term 'roundhead' and I think you do have to agree that it's a lot politer than 'compound quality jackboot'. Terms like 'roundhead' and 'jackboot' are invariably associated with pain and that brings me to the next topic which is PAINS. My interest in this topic was piqued by a PAINS-shaming post at Practical Fragments and I have to thank my friends there for launching me on what has proven to be a most stimulating, although at times disturbing, line of inquiry.

My first post on PAINS examined some of the basic science and cheminformatics behind the substructural filters used. One observation that I'll make is that cheminformaticians would have done themselves rather more credit if, instead of implementing PAINS filters quite so enthusiastically, they'd first taken a more forensic look at how the filters had been derived. Singlet oxygen is an integral component of the AlphaScreen technology used in all six assays that formed the basis of the original PAINS study and the second post explored some of the consequences of this reliance on singlet oxygen. The third post was written as a 'get out of jail' card for those who need to get their use of PAINS past manuscript reviewers but, on a more serious note, it does pose some questions about how much we actually know about the behavior of PAINS compounds. The final PAINS post emphasized the need to make a clear distinction in science between what we know and what we believe. If we are unable (or unwilling) to demonstrate that we can do this in drug discovery then those who fund our work may conclude that the difficulties we face are of our own making.

There's actually a lot more to Sarajevo than dead Habsburgs and the city hosted the  1984 Winter Olympics. I took a taxi to the top of the bobsled run and walked back down to the city. Here are some photos. 




  


  
  

So I guess you're wondering where the 1984 bobsled run fits into drug discovery. Ligand efficiency is, in essence, about slopes and intercepts and, like bobsledders, ligand efficiency advocates prefer not to think about intercepts. I did two posts on ligand efficiency in 2015. The first post was a response to an article in which our criticism of ligand efficiency metrics was denounced as noise although, in the manner of Pravda, the article didn't actually say what the criticism was and I was left with the impression of a panicky batsman desperately trying to fend off a throat ball that had lifted sharply from just short of a length. The second post explored the link between ligand efficiency and homeopathy.

I have described ligand efficiency as not even wrong and it also fits snugly into the voodoo thermodynamics category. Sometimes I think that if a coiled dog turd could be converted to molar energy units and scaled by coil radius then it would get adopted as a metric (which we might call 'scatological efficiency'). Voodoo thermodynamics is likely to feature more frequently in 2016 although I did manage one post on this topic in 2015. 

I took the train from Sarajevo to Mostar and the next four photos show a guy jumping, as is the local custom, off the reconstructed Stari Most into the Neretva River.


 





Now I guess you're wondering what a guy jumping off a bridge in Herzegovina has to do with molecular design and the quick answer is nothing at all. During the course of the year I jumped off a bridge of sorts (more accurately out of my applicability domain) with a post on Open Access and there'll hopefully be more of this sort of thing this year. This is probably a good point to wrap up the review of 2015 and I look forward to seeing you towards the end of the month when you'll meet the boys who cried wolf.


Thursday, 31 December 2015

The homeopathic limit of ligand efficiency

With the thermodynamic proxies staked, I can get back to salting the ligand efficiency (LE) soucouyant. In the previous post on this topic, I responded to a 'sound and fury' article which appeared to express the opinion that we should be using the metrics and not asking rude questions about their validity. One observation that one might make about an article like this is that the journal in question could be seen as trying to suppress legitimate scientific debate and I put this to their editorial staff. The response was that an article like this represents the opinion of the author and that I should consider a letter to the editor if there was a grievance. I reassured them that there was no grievance whatsoever and that it actually takes a lot of the effort out of providing critical commentary on the drug discovery literature when journals serve up cannon-fodder like this. In the spirit of providing helpful and constructive feedback to the editorial team, I did suggest that they might discuss the matter among themselves because a medicinal chemistry journal that is genuinely looking to the future needs to be seen as catalyzing rather than inhibiting debate. Now there is something else about this article that it took me a while to spot which is that it is freely available while 24 hours of access to the other editorial article in the same issue will cost you $20 in the absence of a subscription. If the author really coughed up for an APC so we don't have to pay to watch the toys getting hurled out of the pram then fair enough. If, however, the journal has waived the APC then claims that it is not attempting to stifle debate become a lot less convincing. Should we be talking about the Pravda of medicinal chemistry? Too early to say but I'll be keeping an eye open for more of this sort of thing.

Recalling complaints that our criticism of the thermodynamic basis of LE was 'complex', I'm going to try to make things even simpler than in the previous post. They say a picture is worth a thousand words so I'm going to use a graphical method to show how LE can be assessed. To make things really simple, we'll dispense with the pretentious energy units by using -log10(IC50/Cref) as the measure of activity and I'll also point you towards an article that explains why you need that reference concentration (Cref) if you want to calculate a logarithm for IC50. I'll plot activity for two hypothetical compounds, one of which is a fragment with 10 heavy atoms and the other is a more potent hit from high-throughput screening (HTS) that has 20 heavy atoms. I won't actually say what units IC50 values are expressed in and you can think of the heavy atom axis as sliding up or down the activity axis in response to changes in the concentration unit Cref. I've done things this way to emphasize the arbitrary nature of the concentration unit in the LE context.




Take a look at the plot in the left of the figure which I've labeled as 'A'.  Can you tell which of the compounds is more ligand-efficient just by looking at this plot?  Don't worry because I can't either. 

It's actually very easy to use a plot like this to determine whether one compound is more ligand-efficient than another one. First locate the point on the vertical axis corresponding to an IC50 of 1 M. Then draw a line through this point and the point representing the activity of one of the compounds. If the point representing the activity of the other compound lies below the line then it is less ligand-efficient and vice versa. Like they say in Stuttgart, vorsprung durch metrik!

Now take a look at the plot on the right of the figure which I've labelled 'B'. I've plotted two different lines that pass through the point corresponding to the fragment hit. The red line suggests that the fragment hit is more ligand efficient than the HTS hit but the green line suggests otherwise. Unfortunately there's no way of knowing which of these lines is the 'official' LE line (with intercept corresponding to IC50 = 1 M) because I've not told you what units IC50 is expressed in. Presenting the IC50 values in this manner is not particularly helpful if you need to decide which of the two hits is 'better' but it does highlight the arbitrary manner in which the intercept is selected in order to calculate LE. It also highlights how our choice of intercept influences our perception of efficiency.  
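A numerical version of the plot makes the same point without the figure (hypothetical IC50 values and reference concentrations; the interesting thing is the rank reversal, not the numbers):

```python
from math import log10

def slope_from_intercept(ic50, heavy_atoms, c_ref=1.0):
    """Slope of the line from the chosen intercept: -log10(IC50/Cref) per heavy atom."""
    return -log10(ic50 / c_ref) / heavy_atoms

fragment = dict(ic50=1e-4, heavy_atoms=10)  # hypothetical 100 uM fragment hit
hts_hit = dict(ic50=1e-6, heavy_atoms=20)   # hypothetical 1 uM HTS hit

for c_ref in (1.0, 1e-3):                   # intercept at 1 M versus 1 mM
    print(c_ref,
          slope_from_intercept(**fragment, c_ref=c_ref),
          slope_from_intercept(**hts_hit, c_ref=c_ref))
# c_ref = 1 M: fragment (0.4) beats HTS hit (0.3); c_ref = 1 mM: HTS hit (0.15) beats fragment (0.1)
```

With the conventional 1 M intercept the fragment looks the more efficient of the two; move the intercept to 1 mM and the HTS hit comes out ahead.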

You can also think of the intercept as a zero molecular size limit for activity. One reason for doing so is that if the correlation between activity and molecular size is sufficiently strong, you may be able to extrapolate the trend in the data to the activity axis. Would it be a good idea to make assumptions about the intercept if the data can't tell you where it is? LE is based on an assumption that the 1 M concentration is somehow 'privileged' but, in real life, molecules don't actually give a toss about IUPAC. You can almost hear the protein saying "IUPAC... schmupack" when the wannabe ligand announces its arrival outside the binding pocket armed with the blessing of a renowned thought-leader. The best choice of a zero molecular size limit for activity would appear to be an interesting topic for debate. Imagine different experts each arguing noisily for his or her recommended activity level to be adopted as the One True Zero Molecular Size Limit For Activity. With apologies to Prof Tolkien and his pointy-eared friends,


One Unit to rule them all, One Unit to find them,
One Unit to bring them all and in the darkness bind them, 

If this all sounds strangely familiar, it might be because you can create an absence of a solute just as effectively by making its molecules infinitely small as you can by making the solution infinitely dilute. Put another way, LE may be a lot closer to homeopathy than many 'experts' and 'thought-leaders' would like you to believe.

So that's the end of blogging for the year at Molecular Design.  I wish all readers a happy, successful and metric-free 2016.

Thursday, 5 November 2015

The rise and fall of rational drug design

I'll be taking a look at a thought-provoking article (which got me chuckling several times) on drug design by my good friend Brooke Magnanti in this blog post. I've known Brooke for quite a few years and I'll start this post with some photos taken by her back in 1998 on a road trip that took us from Santa Fe to Carlsbad via Roswell and then to White Sands and the Very Large Array before returning to Santa Fe. The people in these photos (Andrew Grant, Anthony Nicholls, Roger Sayle and) also appear in Brooke's article and you can see Brooke reflected in my sunglasses. The football photos are a great way to remember Andrew, who died in his fiftieth year while running, and they're also a testament to Andrew's leadership skills because I don't think anybody else could have got us playing football at noon in the gypsum desert that is White Sands. We can only guess what Noël Coward would have made of it all.


What Brooke captures in her article on rational drug design is the irrational optimism that was endemic in the pharma/biotech industry of the mid-to-late nineties and she also gives us a look inside what used to be called 'Info Mesa'.  I particularly liked, 

"The name Info Mesa may be more apt than those Wired editors realised, since the prospects of a paradigm shift in drug development rose rapidly only to flatten out just when everyone thought they were getting to the top."

However, it wasn't just happening in computation and informatics in those days and it can be argued that the emergence of high-throughput screening (HTS) had already taken some of the shine off virtual screening even before usable virtual screening tools became available. The history of technology in drug discovery can be seen as a potent cocktail of hype and hope that dulls the judgement of even the most level-headed. In the early (pre-HTS) days of computational chemistry we were, as I seem to remember the founder of a long-vanished start-up saying, not going to be future-limited. In a number of academic and industrial institutions (although thankfully not where I worked), computational chemists were going to design drugs with computers and any computational chemist who questioned the feasibility of this noble mission was simply being negative. HTS changed things a bit and, to survive, the computational chemist needed to develop cheminformatic skills.

There is another aspect to technology in the pharma/biotech industry which is not considered polite to raise (although I was sufficiently uncouth to do so in this article).  When a company spends a large amount of money to acquire a particular capability, it is in the interests of both vendor and company that the purchase is seen in the most positive light. This can result in advocates for the different technologies expending a lot of energy in trying to show that 'their' technology is more useful and valuable than the other technologies and this can lead to panacea-centric thinking by drug discovery managers (who typically prefer to be called 'leaders'). In drug discovery, the different technologies and capabilities tend to have the greatest impact when deployed in a coordinated manner. For example, the core technologies for fragment-based drug discovery are detection/quantification of weak affinity and efficient determination of structures for fragment-protein complexes. Compound management, cheminformatics and the ability to model protein-ligand complexes all help but, even when used together, these cannot substitute for FBDD's core technologies.  Despite the promises, hype and optimism twenty years ago, so vividly captured by Brooke, small molecule drug discovery is still about getting compounds into assays (and it is likely to remain that way for the foreseeable future).

This is probably a good point to say something about rational drug design. Firstly, it is not a term that I tend to use because it is tautological and we are yet to encounter 'irrational drug design'. Secondly, much of the focus of rational drug design has been identification of starting points for optimization which, by some definitions, is not actually design. I would argue that few technological developments in drug discovery have been directed at the problems of lead optimization. This is not to say that technology has failed to impact on lead optimization. For example, the developments in automation that enabled HTS also led to increased throughput of CYP inhibition assays. One indication of the extent to which technological developments have ignored the lead optimization phase of drug discovery is the almost reverential view that many have of the Rule of 5 (Ro5) almost twenty years after it was first presented. There is some irony here because Ro5 is actually of very limited value in typical lead optimization scenarios in that it provides little or no guidance for how the characteristics of Ro5-compliant compounds can be improved. When rational drug design is used in lead optimization, the focus is almost always on affinity prediction which is only one half of the equation. The other half of that equation is free drug concentration which is a function of dose, location in the body and time. I discussed some of the implications of this in a blog post and have suggested that it may be useful to define measures of target engagement potential when thinking about drug action.

What I wrote in that blog post four years ago would have been familiar to many chemists working in lead optimization twenty years ago and that's another way of saying that lead identification has changed a lot more in the last twenty years than has lead optimization.  Perhaps it is unfair to use the acronym SOSOF (Same Old Shit Only Faster) but I hope that you'll see what I'm driving at.  Free drug concentration is a particular problem when the drug targets are not in direct contact with blood as is the case when the target is intracellular or on the 'dark side' of the blood brain barrier. If you're wondering about the current state of the art for predicting intracellular free drug concentration, I should mention that it is not currently possible to measure this quantity for an arbitrary chemical compound in live humans. That's a good place to leave things although I should mention that live humans were not the subject of Brooke's doctoral studies...