Sunday 20 October 2024

Assessment of AI-generated chemical structures using ML


In an earlier post I considered what it might mean to describe drug design as AI-based. In this post I’ll take a general look at using machine learning (ML) to predict biological activity (and other pharmaceutically-relevant properties) for AI-generated chemical structures. Whether or not ML models ultimately prove to be fit for this purpose it is worth pointing out that many visionaries and thought leaders who tout computation as a panacea for humanity’s ills fail to recognize the complexity of biology (take a look at In The Pipeline posts from 2007 | 2015 | 2024). One point worth emphasizing in connection with the complexity of biology is that it is not currently possible to measure the concentration of a drug at its site of action for intracellular targets in live humans (here's an article on intracellular and intraorgan drug concentration that I recommend to everybody working in drug discovery and chemical biology). While I won't actually be saying anything about AI (here's a recent post from In The Pipeline that takes a look at how things are going for early movers in the field of AI drug discovery) in the current post I'll reiterate the point with which I concluded the earlier post:

One error commonly made by people with an AI/ML focus is to consider drug design purely as an exercise in prediction while, in reality, drug design should be seen more in a Design of Experiments framework.  

In that earlier post I noted that there’s a bit more to drug design than simply generating novel molecular structures and suggesting how the compounds should be synthesized. While I'm certainly not denying the challenges presented by the complexity of biology the current post will focus on some of the challenges associated with assessing chemical structures churned out by generative AI. One way of doing this is to build models for predicting biological activity and other pharmaceutically relevant properties such as aqueous solubility, permeability and metabolic stability. This is something that people have been trying to do for many years and the term ‘Quantitative Structure-Activity Relationship’ (QSAR) has been in use for over half a century (the inaugural EuroQSAR conference was held in Prague in 1973 a mere five years after Czechoslovakia had been invaded by the Soviet Union, the Polish People's Republic, the People's Republic of Bulgaria, and the Hungarian People's Republic). My view is that many of the ML models that get built with drug design in mind could accurately be described as QSAR models and I would not describe QSAR models as AI.

In the current post, I'll be discussing ML models for predicting quantities such as potency, aqueous solubility and permeability that are continuous variables, which I refer to as 'regression-based ML models' (while some readers will not be happy with this label I do need to make it absolutely clear that the post is about one type of ML model and the label 'QSAR-like' could also have been used). I’ll leave classification models for another post although it’s worth mentioning that genuinely categorical data are actually rare in drug discovery (you should always be wary of gratuitous categorization of continuous data since this is a popular way to disguise the weakness of trends and KM2013 will give you some tips on what to look out for). It also needs to be stressed that ML is a very broad label and that utility in one area (prediction of protein folding, for example) doesn't mean that ML models will necessarily prove useful in other areas.

To build a regression-based ML model you first need to assemble a training set of compounds for which the appropriate measurements have been made and pIC50 values are commonly used to quantify biological activity (I recommend reading the LR2024 study on combining results from different assays although, as discussed in this post, I don’t consider it meaningful to combine data from multiple pairs of assays when calculating correlation-based metrics for assay compatibility). Next, you calculate values of descriptors for the chemical structures of the compounds in your training set (descriptors are typically derived from the connectivity in the chemical structure although atom counts and predicted values of physicochemical properties are also used). Finally, you use the ML modelling tools to find a function of the descriptors that best predicts the biological activity (or a pharmaceutically-relevant property) for the compounds in the training set. Generally you should also validate your models and this is especially important for models with large numbers of adjustable parameters.
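The workflow above can be sketched in a few lines. This is purely an illustrative sketch: the descriptor values and pIC50 values below are synthetic stand-ins (real descriptors would be calculated from chemical structures), and the random forest is just one of many possible choices of learner.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a training set: 200 compounds, 5 descriptors
X = rng.normal(size=(200, 5))
# Synthetic pIC50 values depending on the first two descriptors plus noise
y = 6.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200)

model = RandomForestRegressor(n_estimators=200, random_state=0)

# Cross-validation as a minimal form of the validation step described above
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R2: {scores.mean():.2f}")

# Fit the final model on the full training set
model.fit(X, y)
```

Cross-validation is only one of several validation tactics (external test sets and y-scrambling are others) but it illustrates why validation matters most for models with many adjustable parameters.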

There appears to be a general consensus that you need plenty of data for building ML models and some will even say “quantity has a quality all of its own” (this is sometimes stated as Stalin’s view of the T-34 tank although I consider this unlikely and the T-34 was actually an excellent tank which also happened to get produced in large numbers). Most people building regression-based ML models are also aware that you need a sufficiently wide spread in the measured data used for training the model (the variance in the measured data should be large in comparison with the precision of the measurement). Lead optimization is typically done within structural series and building a regression-based ML model that is predictively useful is likely to require data that have been measured for compounds in the structural series of interest.  These data requirements are quite stringent and I see this as one reason that QSAR approaches do not appear to have had much impact on the discovery of drugs despite the drug discovery literature being awash with QSAR articles. Back in 2009 (see K2009) I compared prediction-driven drug design with hypothesis-driven drug design, noting that the former is often not viable and that the latter is more commonly used in pharmaceutical and agrochemical discovery (former colleagues discussed hypothesis-driven molecular design in the context of the design-make-test-analyse cycle in the P2012 article).

With freshly painted T-34 at Brest Fortress, Belarus (June 2017)

There are some other points that you need to pay attention to when building regression-based ML models.  First, replicate measurements for the response variable (the quantity that you’re trying to predict) should be normally distributed and this is one reason why we model pIC50 rather than IC50. Second, the data values for the training set should be uniformly distributed in the descriptor space (my view, expressed in B2009, is that many 'global' predictive models are actually ensembles of local models). Third, the descriptors should not be strongly correlated or the method used to build the regression-based ML model must be able to account for relationships between descriptors (while it’s relatively straightforward to handle linear relationships between descriptors in simple regression analysis it’s not clear how effectively this can be achieved with more sophisticated algorithms used for building regression-based ML models).
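The log transformation that makes pIC50 preferable to IC50 as a response variable is trivial to apply. A minimal helper, assuming IC50 values quoted in nM (the function name is mine):

```python
import math

def pic50_from_ic50_nM(ic50_nM: float) -> float:
    """Convert an IC50 in nM to pIC50, i.e. -log10 of the molar IC50."""
    return 9.0 - math.log10(ic50_nM)

print(pic50_from_ic50_nM(10.0))  # a 10 nM inhibitor -> prints 8.0
```

Replicate IC50 values tend to be log-normally distributed, so it is the pIC50 values (not the IC50 values) whose replicates can plausibly be treated as normally distributed.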

I’ve created a graphic (Figure 1) to illustrate some of the modelling difficulties that result from uneven coverage in the descriptor space and it goes without saying that reality will be way more complex. The entities that occupy this chemical space are compounds and the coordinates of a point show the values of the descriptors X1 and X2 that have been calculated from the corresponding chemical structures (the terms ‘2D structure’ and ‘molecular graph’ are also used). I’ve depicted real compounds for which measured data are available as black circles and virtual compounds (for which predictions are to be made) as five-pointed stars. The clusters (color-coded but also labelled A, B and C in case any readers are colour blind) are much more clearly defined than would be the case in a real chemical space. Proximity in chemical space implies similarity between compounds and the clusters might correspond to three different structural series.

Let’s suppose that we’ve been able to build a useful local model to predict pIC50 for each cluster even though we’ve not been able to build a predictively useful global model. Under this scenario you’d have a relatively high degree of confidence in the pIC50 values predicted for the virtual compounds (depicted as five-pointed stars) that lie within the clusters and a much lower degree of confidence in the virtual compound that is indicated by the arrow. If, however, we were to ignore the structure of the data and take a purely global view then we would conclude that the virtual compound indicated by the arrow occupied a central location in this region of chemical space and that the other three virtual compounds occupied peripheral locations. Put another way, the applicability domain of the model is not a single contiguous region of chemical space and what would appear to be an interpolation by a model is actually an extrapolation. 

It is important to take account of correlations between descriptors when building prediction models. A commonly employed tactic is to perform principal component analysis (PCA) which generates a new set of orthogonal descriptors and also provides an assessment of the dimensionality of the descriptor space. There are also ways to deal with correlations between descriptors in the model building process (PLS is the best known of these and the K1999 review might also be of interest). Correlations between descriptors also complicate interpretation of ML models and my stock response to any claim that an ML model is interpretable would be to ask how relationships between descriptors had been accounted for in the modelling of the data. An excellent illustrative example (see L2012) of a correlation between descriptors is the tendency of the presence of a basic nitrogen in a chemical structure to be associated with higher values of the Fsp3 descriptor (which, as pointed out in this post, should really be referred to as the I_ALI descriptor).
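The PCA tactic described above can be illustrated with a short sketch (the two synthetic descriptors here are deliberately built to be strongly correlated; in practice you would feed in calculated descriptor values):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
x1 = rng.normal(size=500)
x2 = 0.9 * x1 + 0.3 * rng.normal(size=500)  # strongly correlated with x1
X = np.column_stack([x1, x2])

pca = PCA(n_components=2)
pc = pca.fit_transform(X)

# The principal component scores form a new set of uncorrelated descriptors
corr = np.corrcoef(pc.T)[0, 1]
print(f"correlation between components: {corr:.3f}")

# The explained-variance ratios indicate the effective dimensionality:
# here most of the variance lies along a single direction
print(pca.explained_variance_ratio_)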

Let’s take another look at Figure 1. The axes of the ellipse representing Cluster A are aligned with the axes of the figure which tells us that X1 and X2 are uncorrelated for the compounds in this cluster. Cluster B is also represented by an ellipse although its axes are not aligned with the axes of the figure which implies a linear correlation between X1 and X2 for the compounds in this cluster (you can use PCA to create two new orthogonal descriptors by rotating the plot around an axis that is perpendicular to the X1-X2 plane). Cluster C is a bigger problem because the correlation between X1 and X2 is non-linear (the cluster is not represented as an ellipse) and it would be rather more difficult to generate two new orthogonal descriptors for the compounds in this cluster. My view is that PCA is less meaningful when there is a lot of clustering in data sets and I would also question the value of PLS and related methods in these situations.

Let’s consider another scenario by supposing that we’ve been unable to build a useful local model for prediction of any of the three clusters in Figure 1.  If, however, the average pIC50 values differ for each of the three clusters we can still extract some predictivity from the data by finding a function of X1 and X2 that correlates with the average pIC50 values for the clusters. This is one way that clustering of compounds in the descriptor space can trick you into thinking that a global model has a broader applicability domain than is actually the case. Under this scenario it would be very unwise to try to interpret the model or use it to make predictions for compounds that sit outside the clusters. 
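This scenario can be demonstrated with a small simulation (the cluster locations, mean pIC50 values and noise levels below are invented purely for illustration): three clusters with different mean pIC50 values but no within-cluster signal still yield an apparently good global model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
# (x1 centre, x2 centre, mean pIC50) for three clusters
centres = [(0.0, 0.0, 5.0), (4.0, 4.0, 7.0), (8.0, 0.0, 9.0)]

X_parts, y_parts = [], []
for cx, cy, mean_pic50 in centres:
    X_parts.append(rng.normal(loc=(cx, cy), scale=0.5, size=(50, 2)))
    # pIC50 is pure noise around the cluster mean: no within-cluster signal
    y_parts.append(mean_pic50 + rng.normal(scale=0.3, size=50))
X, y = np.vstack(X_parts), np.concatenate(y_parts)

# Global fit looks good because it tracks the differences in cluster means
global_r2 = LinearRegression().fit(X, y).score(X, y)
# Fit within a single cluster has essentially nothing to explain
within_r2 = LinearRegression().fit(X[:50], y[:50]).score(X[:50], y[:50])
print(f"global R2: {global_r2:.2f}, within-cluster R2: {within_r2:.2f}")
```

The global R2 here is driven entirely by the between-cluster differences, which is exactly why the model should not be interpreted or used outside the clusters.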

This is a good point at which to wrap up my post on regression-based ML (or QSAR-like if you prefer) models for predicting biological activity and other properties relevant to drug design such as aqueous solubility, permeability and metabolic stability. There appears to be a general consensus that building these models requires a lot of data and, in my view, this means that models like these are actually of limited utility in real world drug design. The basic difficulty is that a project team with enough data for building useful regression-based ML models is likely to be at a relatively advanced stage (the medicinal chemists will already understand the structure-activity relationships and are aware of project-specific issues such as poor aqueous solubility or high turnover by metabolic enzymes). Drug discovery scientists tend to be less aware of the problems that arise from clustering of compounds in descriptor space and, in my view, this is a factor that should be considered by those seeking to assemble data sets for benchmarking (see W2024). I'll leave you with a suggestion (it was considered a terrible idea at the time) I made over twenty years ago that each predicted value should be accompanied by chemical structures and measured values for the three closest neighbours in the descriptor space of the model.


Wednesday 18 September 2024

Variability in biological activity measurements reported in the drug discovery literature

I'll open the post with a panorama from the summit of Shutlingsloe, sometimes referred to as Cheshire's Matterhorn, which, at 506 m above sea level, is the third highest point in the county. When in the UK, I usually come here to mark the solstices and there's usually a good crowd here for the occasion (the winter solstice tends to be less well attended). 

  

The LR2024 study (Combining IC50 or Ki Values from Different Sources Is a Source of Significant Noise) that I’ll be discussing in this post highlights one of the issues that you’re likely to encounter should you be using public domain databases such as ChEMBL to create datasets for building machine learning (ML) models for biological activity. The LR2024 study has already been reviewed in a Practical Fragments post (The limits of published data) and, using the same reference numbers as were used in the study, I’ll also mention 10 (The Experimental Uncertainty of Heterogeneous Public Ki Data) and 11 (Comparability of Mixed IC50 Data – A Statistical Analysis). The variability in biological activity data highlighted by LR2024 stems in part from the fact that the term IC50 may refer to different quantities even when measurements are performed for the same target and inhibitor/ligand (the issue doesn’t entirely disappear when you use Ki values). I have two general concerns with the analysis in the LR2024 study. First, it is unclear whether the ChEMBL curation process captures assay conditions in sufficient detail to enable the user to establish that two IC50 values can be regarded as replicates of the same experiment (I stress that this is not a criticism of the curation process). Second, combining data for different pairs of assays for calculation of correlation-based measures of assay compatibility can lead to correlation inflation. One minor gripe that I do have with the LR2024 study concerns the use of the term “noise” which, in my view, should only refer to variation in values measured under identical conditions.

I'll review LR2024 in the first part of the post before discussing points not covered by the study such as irreversible inhibition and assay interference (these can cause systematic differences in IC50 values to be observed for a particular combination of target and inhibitor even when the assays use the same substrate at the same concentration). There will be a follow up post covering how I would assemble data sets for building ML models for biological activity with some thoughts on assessment and curation of published biological activity data. As is usual for blog posts here at Molecular Design, quoted text is indented with my comments enclosed in square brackets in red italics.

In the Compatibility Issues section the authors state:

Looking beyond laboratory-to-laboratory variability of assays that are nominally the same, there are numerous reasons why literature results for different assays measured against the same “target” may not be comparable. These include the following:

  1. Different assay conditions: these can include different buffers, experimental pH, temperature, and duration. [Biochemical assays are usually run at human body temperature (37°C) although assay temperature is not always reported. The term 'duration' is pertinent to irreversible inhibition and one has to be very careful when comparing IC50 values for irreversible inhibitors. It's worth mentioning that a significant reduction in activity when an assay is run in the presence of detergent (see FS2006) is diagnostic of inhibition by colloidal aggregates (see McG2003). I categorized inhibition of this nature as “type 2 behaviour” in a Comment on "The Ecstasy and Agony of Assay Interference Compounds" Editorial.] 
  2. Substrate identity and concentration: these are particularly relevant for IC50 values from competition assays, where the identity and concentration of the substrate being competed with play an important role in determining the results. Ki measures the binding affinity of a ligand to an enzyme and so its values are, in principle, not sensitive to the identity or concentration of the substrate. [My view is that one would generally need to establish that IC50 values had been determined using the same substrate and same substrate concentration if interpreting variation in the IC50 values as "noise" and it's not clear that the substrate-related information needed to establish the comparability of IC50 determinations is currently stored in ChEMBL. If concentrations and Km values are known it may be practical to use the Cheng-Prusoff equation (see CP1973) to combine IC50 values that have been measured using different concentrations of substrate (or cofactor). It's worth noting that enzyme inhibition studies are commonly run with the substrate concentration at its Km value (see Assay Guidance Manual: Basics of Enzymatic Assays for HTS NBK92007) and there is a good chance that assays against a target using a particular substrate will have been run using very similar concentrations of the substrate. It is important to be especially careful when analysing kinase IC50 data because assays are sometimes run at high ATP concentration in order to simulate intracellular conditions (see GG2021).]
  3. Different assay technologies: since typical biochemical assays do not directly measure ligand–protein binding, the idiosyncrasies of different assay technologies can lead to different results for the same ligand–protein pair. (7) [Significant differences in IC50 (or Ki) values measured for a particular combination of target and compound using different assay read-outs are indicative of interference and I’ll discuss this point in more detail later in the post.]
  4. Mode of action for receptors: EC50 values can correspond to agonism, antagonism, inverse agonism, etc.  [The difficulty here stems from not being able to fully characterize the activity in terms of a concentration response (for example, agonists are characterised by both affinity and efficacy).]
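The Cheng-Prusoff correction mentioned in point 2 can be expressed in a few lines (this is the relation for a competitive inhibitor; the function name is mine):

```python
def ki_from_ic50(ic50: float, s_conc: float, km: float) -> float:
    """Cheng-Prusoff relation for a competitive inhibitor:
    Ki = IC50 / (1 + [S]/Km). The units of ic50 are preserved;
    s_conc and km must share the same units."""
    return ic50 / (1.0 + s_conc / km)

# With [S] = Km (a common assay design choice), IC50 is simply twice Ki
print(ki_from_ic50(100.0, s_conc=1.0, km=1.0))  # prints 50.0
```

The correction only applies when the mechanism is competitive and both [S] and Km are actually known, which is exactly the substrate-related information that may not be captured by database curation.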

The situation is further complicated when working with databases like ChEMBL, which curate literature data sets:

  1. Different targets: different variants of the same parent protein are assigned the same target ID in ChEMBL [My view is that one needs to be absolutely certain that assays have been performed using identical (including with respect to post-translational modifications) targets before interpreting differences in IC50 or Ki values as noise or experimental error.] 
  2. Different assay organism or cell types: the target protein may be recombinantly expressed in different cell types (the target ID in ChEMBL is assigned based on the original source of the target), or the assays may be run using different cell types.  [There does appear to be some confusion here and it would not generally be valid to assign a ChEMBL target ID to a cell-based assay.]  
  3. Any data source can contain human errors like transcription errors or reporting incorrect units. These may be present in the original publication─when the authors report the wrong units or include results from other publications with the wrong units─or introduced during the data extraction process.

The authors describe a number of metrics for quantifying compatibility of pairs of assays in the Methods section of LR2024.  My view is that compatibility between assays should be quantified in terms of differences between pIC50 (or pKi) values and I consider correlation-based metrics to be less useful for this purpose. The degree to which pIC50 values for two assays run against a target are correlated reflects the (random) noise in each assay and the range (more accurately the variance) in the pIC50 values measured for all the compounds in each assay.  Let’s consider a couple of scenarios.  First, results from two assays are highly correlated but significantly offset from each other to a consistent extent (the assays might, for example, measure IC50 for a particular target using different substrates). Under this scenario it would be valid to include results from both assays in a single analysis (for example, by using the observed offset between pIC50 values as a correction factor) even though it would not be valid to treat the pIC50 values for compounds in the two assays as equivalent. In the second scenario, the correlation between the assays is limited by the narrowness of the range in the IC50 values measured for the compounds in the two assays. Under this scenario, differences between the pIC50 values measured for each compound can still be used to assess the compatibility of the two assays even though the range in the IC50 values is too narrow for a correlation-based metric to be useful. 

The compatibility between the two assays was measured by comparing pchembl values of overlapping compounds. [The term pchembl does need to be defined.] In addition to plotting the values, a number of metrics were used to quantify the degree of compatibility between assay pairs:

  • R2: the coefficient of determination provides a direct measure of how well the “duplicate” values in the two assays agree with each other. Values range from −1.0 to 1.0 with larger values corresponding to higher compatibility. [I’ve discussed limitations of correlation-based metrics for assessing the compatibility of assays in the preceding paragraph.] 
  • Kendall τ: nonparametric measure of how equivalent the rankings of the measurements in the two assays are. Values range from −1.0 to 1.0 with larger values corresponding to higher compatibility. [I’ve discussed limitations of correlation-based metrics for assessing the compatibility of assays in the preceding paragraph.]
  • f > 0.3: fraction of the pairs where the difference is above the estimated experimental error. Smaller values correspond to higher compatibility. [The uncertainty in the difference between two pIC50 values is greater than the uncertainty in either pIC50 value (an uncertainty of 0.3 in ΔpIC50 would correspond to an uncertainty of 0.2 in each of the pIC50 values from which the difference had been calculated).]
  • f > 1.0: fraction of the pairs where the difference is more than one log unit. This is an arbitrary limit for a truly meaningful activity difference. Smaller values correspond to higher compatibility. [The uncertainty in the difference between two pIC50 values is greater than the uncertainty in either pIC50 value (an uncertainty of 1.0 in ΔpIC50 would correspond to an uncertainty of 0.7 in each of the pIC50 values from which the difference had been calculated).]
  • κbin: Cohen’s κ calculated between the assays after binning their results into active and inactive using bin as the activity threshold. Values range from −1.0 to 1.0 with larger values corresponding to higher compatibility. [I’ve discussed limitations of correlation-based metrics for assessing the compatibility of assays in the preceding paragraph. I generally advise against binning continuous data prior to assessment of correlations because the operation discards information and the values of the correlation metrics vary with the scheme used to bin the data.]
  • MCCbin: Matthew’s correlation coefficient calculated between the assays after binning their results into active and inactive using bin as the activity threshold. Values range from −1.0 to 1.0 with larger values corresponding to higher compatibility. [I’ve discussed limitations of correlation-based metrics for assessing the compatibility of assays in the preceding paragraph. I generally advise against binning continuous data prior to assessment of correlations because this operation discards information and the values of the correlation metrics vary with the scheme used to bin the data.]
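The bracketed comments on the f > 0.3 and f > 1.0 metrics follow from standard error propagation: for the difference of two independent, equally precise measurements the uncertainty scales by √2 relative to that of a single measurement. A minimal sketch:

```python
import math

def sigma_delta(sigma1: float, sigma2: float) -> float:
    """Standard deviation of the difference of two independent measurements."""
    return math.sqrt(sigma1**2 + sigma2**2)

# An uncertainty of 0.3 in delta-pIC50 corresponds to ~0.21 per assay
per_assay = 0.3 / math.sqrt(2.0)
print(f"{per_assay:.2f}")                           # prints 0.21
print(f"{sigma_delta(per_assay, per_assay):.2f}")   # prints 0.30
```

The same arithmetic gives roughly 0.7 per assay for a ΔpIC50 uncertainty of 1.0.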

Let’s take a look at some of the results reported in the LR2024 study and it’s interesting that f > 0.3 and f > 1.0 values were comparable for IC50 and Ki measurements. This is an important result since Ki values do not depend on the concentration and Km of the substrate (or cofactor) and I would generally anticipate greater variation in IC50 values measured for each compound-target pair than for the corresponding Ki values. 

We first looked at the variation in the data sets when IC50 assays are combined using “only activity” curation (top panels in Figure 2). The noise level in this case is very high: 64% of the Δpchembl values are greater than 0.3, and 27% are greater than 1.0. The analogous plot for the Ki data sets is shown in Figure S1 in the Supporting Information. The noise level for Ki is comparable: 67% of the Δpchembl values are greater than 0.3, and 30% are greater than 1.0.

I consider it valid to combine data for different pairs of assays for analysis of ΔpIC50 or ΔpKi values. However, I have significant concerns about the validity of combining data for different pairs of assays for analysis of correlations between pIC50 or pKi values. The authors of LR2024 state:  

In Figure 2 and all similar plots in this study, the points are plotted such that the assay on the x-axis has a higher assay_id (this is the assay key in the SQL database, not the assay ChEMBL ID that is more familiar to users of the ChEMBL web interface) in ChEMBL32 than the assay on the y-axis. Given that assay_ids are assigned sequentially in the ChEMBL database, this means that the x-value of each point is most likely from a more recent publication than the y-value. We do not believe that this fact introduces any significant bias into our analysis.

I see two problems (one minor and one major) in preparing data in this manner for plotting and analysis of correlations over a number of assay pairs. The minor problem is that exchanging assay1 with assay2 for some of the assay pairs will generally result in different values for the correlation-based metrics for compatibility of assays. While I don’t anticipate that the differences would be large the value of a correlation-based metric for assay compatibility really shouldn’t depend on the ordering of the assays. Furthermore, the issue can be resolved by symmetrizing the dataset so that each of the pair of assay results for each compound is included both as the x-value and as the y-value. Symmetrizing the dataset in this manner doubles the number of data points and one would need to be careful if estimating confidence intervals for the correlation-based metrics for assay compatibility. I think that it would be appropriate to apply a weight of 0.5 to each data point for estimation of confidence intervals although I would certainly be consulting a statistician before doing this.
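The order-dependence, and the symmetrization fix, can be demonstrated with synthetic data (the numbers below are invented; scikit-learn's r2_score, which treats its first argument as the reference values, is used as a stand-in for an asymmetric compatibility metric):

```python
import numpy as np
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
a = rng.normal(loc=6.0, scale=1.0, size=40)    # pIC50 values from assay 1
b = a + 0.4 + rng.normal(scale=0.3, size=40)   # offset, noisy replicates

# r2_score is order-dependent: swapping the assays changes the value
print(r2_score(a, b), r2_score(b, a))

# Symmetrize: include each pair of results both ways round
x = np.concatenate([a, b])
y = np.concatenate([b, a])
print(r2_score(x, y), r2_score(y, x))  # identical by construction
```

As noted above, the symmetrized dataset has twice as many points as there are compound-level pairs, which is why confidence interval estimation would need care.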

However, there is also another problem (which I don't consider to be minor) with combining data for assay pairs when analysing correlations. The value of a correlation-based metric for assay compatibility reflects the variance in ΔpIC50 (or ΔpKi) values and the variance in the pIC50 (or pKi) values. The variance in pIC50 (or pKi) values when data for different pairs of assays have been combined would generally be expected to be greater than for the datasets corresponding to the individual assay pairs. Under this scenario I believe that it would be accurate to describe the correlation metrics calculated for the aggregated data as inflated (see KM2013 and the comments made therein on the HMO2016, LS2007 and LBH2009 studies) and as a reviewer of the manuscript I would have suggested that the distribution over all assay pairs be shown for each correlation-based assay compatibility metric. When considering correlations between assays it can also be helpful, although not strictly correct, to think in terms of ranges in pIC50 values. For example, the range in pIC50 values for “only activity curation” in Figure 2 appears to be about 7 log units (I’d be extremely surprised if the range in pIC50 values for any of the individual assays even approached this figure). My view is that correlation-based metrics are not meaningful when data for multiple pairs of assays have been combined although I don't think any real harm has been done given that the authors certainly weren't trying to 'talk up' strengths of trends on the basis of the values of the correlation-based metrics. However, there is a scenario under which this type of correlation inflation would be a much bigger problem and that would be when using measures of correlation to compare measured ΔG values with values that had been calculated by free energy perturbation using different reference compounds.
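A small simulation illustrates the inflation (the assay pairs below are synthetic): two assay pairs that are pure noise within a narrow potency range each show essentially no correlation, yet pooling them yields a strong one because the pooled variance is dominated by the difference between the potency ranges.

```python
import numpy as np

rng = np.random.default_rng(4)

def assay_pair(centre, n=30):
    """Replicate pIC50 values for one assay pair: narrow range, pure noise."""
    x = centre + rng.normal(scale=0.2, size=n)
    y = centre + rng.normal(scale=0.2, size=n)
    return x, y

x1, y1 = assay_pair(5.0)  # one assay pair, potencies near pIC50 5
x2, y2 = assay_pair(8.0)  # another pair, potencies near pIC50 8

r_within = max(abs(np.corrcoef(x1, y1)[0, 1]), abs(np.corrcoef(x2, y2)[0, 1]))
r_pooled = np.corrcoef(np.concatenate([x1, x2]),
                       np.concatenate([y1, y2]))[0, 1]
print(f"within-pair |r| <= {r_within:.2f}, pooled r = {r_pooled:.2f}")
```

This is the same mechanism as the clustering effect discussed for Figure 1: the between-group variance, not agreement between replicate measurements, drives the pooled correlation.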

So far in the post the focus has been on the analysis presented in LR2024 and now I’ll change direction by discussing a couple of topics that were not covered in that study.  I’ll start by looking at irreversible mechanisms of action and the (S2017 | McW2021 | H12024) articles cover irreversible covalent inhibition (this is the irreversible mechanism of action that ChEMBL users are most likely to encounter).  You need two parameters to characterize irreversible covalent inhibition (Ki and kinact respectively quantify the affinity of the ligand for target and the rate at which the non-covalently bound ligand becomes covalently bound to target). While it is common to encounter IC50 values in the literature for irreversible covalent inhibitors these are not true concentration responses because the IC50 values also depend on factors such as pre-incubation time. Another difficulty is that articles reporting IC50 values for irreversible covalent inhibitors don’t always explicitly state that the inhibition is irreversible.

As the authors of LR2024 correctly note, differences between IC50 values may be the result of using different assay technologies. Interference with assay read-out (I categorized this as “type 1 behaviour” in a Comment on "The Ecstasy and Agony of Assay Interference Compounds" Editorial) should always be considered as a potential explanation for significant differences between IC50 values measured for a given combination of target and inhibitor when different assay technologies are used. An article that I recommend for learning more about this problem is SWK2009 which explains how UV/Vis absorption and fluorescence by inhibitors can cause interference with assay read-outs (the study also shows how interference can be assessed and even corrected for). When examining differences between IC50 values for the same combination of target and inhibitor it's worth bearing in mind that interference with assay read-outs tends to be more of an issue at high concentration (this is why biophysical assays tend to be favored for screening fragments). From the data analysis perspective, it’s usually safe to assume that enzyme inhibition assays using the same substrate also use the same type of assay read-out.

Differences in the technology used to prepare the solutions for assays are another potential cause of variation in IC50 values. For example, a 2010 AstraZeneca patent (US7718653B2) disclosed significant differences in IC50 values depending on whether acoustic dispensing or serial dilution was used for preparation of solutions for assay. Compounds were observed to be more potent when acoustic dispensing was used and the differences in IC50 values point to an aqueous solubility issue. The data in US7718653B2 formed the basis for the EOW2013 study.

So that brings us to the end of my review of the LR2024 study and I’ll be doing a follow up post later in the year. One big difficulty in analysing differences between measured quantities is determining the extent to which measured values are directly comparable when IC50 can be influenced by factors such as the technology used to prepare assay solutions. Something that I think would have been worth investigating is the extent to which variability of measured values depends on potency (pIC50 measurements might be inherently more variable for less potent inhibitors than for highly potent inhibitors). The most serious criticism that I would make of LR2024 is that it is not meaningful to combine data for different pairs of assays when calculating correlation-based measures of assay compatibility.

Tuesday 30 July 2024

A Nobel for property-based drug design?

[This post was updated on 04-Aug-2024. I thank Tim Ritchie (see RM2009 | RM2014) for bringing YG2003 (Prediction of Aqueous Solubility of Organic Compounds by Topological Descriptors) to my attention.]

"The problems of ADME are precisely those that determine success or failure of a drug in vivo. In vitro data can give a clearer picture of the receptor characteristics, but knowledge and control of ADME are also vital. A common trap in binding studies is that binding generally increases with lipophilicity, so that one may obtain extremely potent binding that is totally unattainable in vivo."

SH Unger (1987) Computer-Aided Drug Design in the Year 2000. 
Drug Information Journal 21:267-275 DOI
******************************************

In this post I’ll be reviewing an Editorial (Property-Based Drug Design Merits a Nobel Prize) that was recently published in J Med Chem. For me, the Editorial raises questions about the critical thinking skills of its authors and the judgement of the J Med Chem Editors (I’m guessing that some of the courteous and cultured members of the Nobel Prize committee might regard it to be somewhat pushy, and possibly even uncouth, for journals to be publishing nominations for Nobel Prizes as editorials). My advice to anybody nominating individuals for a Nobel Prize is to be aware of an observation, usually attributed to Jocelyn Bell Burnell, that it’s better that people ask why you didn’t win a Nobel Prize than why you did. Where applicable, I've used the same reference numbers that were used in the Editorial and I’ll start by reproducing the Nobel Prize proposal (as is usual in posts at Molecular Design, I’ve inserted some comments, italicized in red and enclosed in square brackets, into the quoted text):
We propose that a Nobel Prize in Physiology or Medicine should be awarded for property-based drug design, with Christopher A. Lipinski, Paul D. Leeson, and Frank Lovering as the proposed recipients for their development of “important principles for drug design” [I would describe what the proposed Nobel laureates have introduced as a rule, a metric and a molecular descriptor rather than principles.], principles that have contributed to the development of numerous approved drugs. [The authors do need to provide convincing evidence to support what appear to be some wildly extravagant claims. Specifically, the authors need to demonstrate that the rule, metric and molecular descriptor (which they describe as “principles”) were actually critical to the decision-making in projects that led to the development of numerous drugs.] While drug design previously focused primarily on optimizing potency, they introduced a more holistic approach based on the consideration of how fundamental molecular and physicochemical properties affect pharmaceutical, pharmacodynamic, pharmacokinetic, and safety properties. [My view is that none of the proposed Nobel laureates even demonstrated a single convincing link between molecular and physicochemical properties, and pharmaceutical, pharmacodynamic, pharmacokinetic, and safety properties.] The development of the Rof5 by Christopher A. Lipinski in 1997 introduced a new principle for how molecular and physicochemical properties affect oral bioavailability. The development of LipE by Paul D. Leeson in 2007 introduced a new principle for how physicochemical properties impact potency, selectivity, and safety. Finally, the development of Fsp3 by Frank Lovering in 2009 introduced a new principle for how molecular shape affects pharmaceutical properties and developability.

Before examining the contributions of the three nominated individuals it's worth saying something about the objectives of drug design. First, a drug needs to be highly active against its target(s). Second, activity against anti-targets should be very low (ideally too low to even be measured). Third, as I note in 34, the exposure (concentration at the site of action) of the drug needs to be controllable (one challenge in drug design is that intracellular drug concentration can’t generally be measured in vivo and I recommend that all drug discovery scientists read SR2019). I see controlling exposure as the primary focus of property-based design and one fundamental challenge is that structural modifications that lead to increased engagement potential for the therapeutic target(s) frequently result in reduced controllability of exposure as well as increased engagement potential for anti-targets. I’ve tried to capture these points in the graphic shown below.


It's generally accepted that excessive lipophilicity and molecular size are risk factors in drug design and the “compound quality” (CQ) literature abounds with fire-and-brimstone sermons on the evils of "molecular obesity" (see H2011). Nevertheless, the relationships between these descriptors and properties such as binding affinity for anti-targets, permeability, aqueous solubility and metabolic lability are generally not quite as strong as is commonly believed (or claimed). When using trends in data to inform design it’s really important to know how strong the trends are because this tells you how much weight to give to the trends when making decisions. It’s not unknown in CQ studies for trends in data to be made to appear to be stronger than they actually are which endows the CQ field with what I’ll politely call a “whiff of the pasture” (the term “correlation inflation” has been used; see KM2013). Transformation of continuous data (IC50 values) to categorical data (high | medium | low) prior to analysis should trigger a deafening cacophony of alarm bells as should any averaging of groups of continuous data values without showing the spread in the data values. Some examples of studies in which I consider the strengths of trends to have been exaggerated include 29, 35, HMO2016 and HY2010.

I think that one thing that everybody who actually works (or has worked) on drug discovery projects agrees on is that drug discovery is really difficult. My view is that, by focusing on Rof5, LipE and Fsp3, the Editorial actually trivializes the challenges faced by drug discovery scientists. Most drug design (as opposed to ligand design) takes place during lead optimization and lead optimization teams are typically addressing specific problems (for example, structural changes that result in increased potency also result in reduced aqueous solubility).  Lead optimization teams typically work with a lot of measured data (a significant component of drug design is efficient generation of data to enable decision-making) and a weak correlation between logP and aqueous solubility reported in the literature would be of no practical relevance when the lead optimization team is using aqueous solubility measurements for compounds in the structural series that they’re optimizing. It is common (see M2001 | G2008) for the simplicity of rules, guidelines and metrics to be touted and we noted in KM2013 that:   

Given that drug discovery would appear to be anything but simple, the simplicity of a drug-likeness model could actually be taken as evidence for its irrelevance to drug discovery.

Guidelines for successful drug discovery are often presented in terms of something good (or bad) being more likely to happen when the value of a calculated property such as Fsp3 exceeds a threshold. When using guidelines like these be aware that it’s actually very difficult to set these threshold values objectively and that the guidelines would have been stated in an identical manner had different threshold values been chosen to specify them. One difficulty with using guidelines like these is that the creators of the guidelines don’t usually say what they mean by “more likely” (millions of people book flights knowing that one is “more likely” to die in a plane crash if one takes a flight than if one doesn’t take a flight). A number of published guidelines (some of which have been referenced in the Editorial) claim that compounds that comply with the guidelines are more likely to be developable. However, giving weight to these claims would require that developability be defined in an objective manner that enables compounds with arbitrary molecular structures and differing biological activity to be meaningfully compared.   

I’ll examine the contributions of the three proposed laureates for the Nobel Prize in Physiology or Medicine following the order in the Editorial. Let's start with the first:
  
The development of the Rof5 by Christopher A. Lipinski in 1997 introduced a new principle for how molecular and physicochemical properties affect oral bioavailability. [As a reviewer of the manuscript I would have pressed the authors to explicitly state the new principle that their first nominee for the Nobel Prize for Physiology or Medicine had introduced in 1997.]

My view is that the publication of the Rof5 (22) has certainly proven to be highly influential in that it made many drug discovery scientists aware of the need to take account of physicochemical properties, in particular lipophilicity, in drug design. What is less well-known, but possibly more important in my view, is that publication of the Rof5 sent a clear message to Pharma/Biotech management that high-throughput screening wasn’t going to be the panacea that many believed that it would be. However, I don't see the Rof5 as quite the epiphany that the authors of the Editorial would have us believe it to be. The quote with which I started this post was taken from an article that had been published ten years before 22 and the inverse nature of the relationship between aqueous solubility and lipophilicity was being discussed in the scientific literature (see YV1980) more than forty years ago. The NC1996 study is also worthy of mention because it was published more than a year before 22 and it makes the important point that optimal logP values are likely to vary with chemotype ("each congeneric series for a drug backbone usually demonstrates its own optimal log P").       

Questions can be raised about the data analysis presented in support of the Rof5 and readers may find it helpful to take a look at the S2019 study as well as my comments on the Rof5 in HBD3 and in this post. I would argue that the Rof5 does not have any practical value as a drug design tool and I would challenge the assertion made in the Editorial that the publication of 22 demonstrated how “molecular and physicochemical properties affect oral bioavailability”. One aspect of the analysis presented (22) in support of the Rof5 that isn't always fully appreciated is that the compounds for which the descriptors are calculated were all treated as having equivalent oral bioavailability (compounds were selected for the analysis on the basis of having been taken into phase 2 clinical trials at some point before the Rof5 had been published in 1997). This is one reason that it’s not credible to assert that the analysis demonstrates that these molecular and physicochemical properties are linked to bioavailability (it must be stressed that, like many, I do actually believe that excessive lipophilicity and molecular size are risk factors in drug design). I make the following point in a blog post (I’ve modified the original text very slightly for consistency with the Editorial):

The Rof5 is stated in terms of likelihood of poor absorption or permeation although no measured oral absorption or permeability data are given in 22 and the Rof5 should therefore be regarded as a statement of belief. I realise that to make such an assertion runs the risk of an appointment with the auto-da-fé and I stress that had the Rof5 been stated in terms of physicochemical and molecular property distributions I would not have made the assertion.

To see what I was getting at let’s take a look at how the Rof5 was stated in 22 (“The ‘rule of 5’ states that: poor absorption or permeation are more likely when…”). However, the analysis presented in support of the Rof5 was of the distribution of compounds in chemical space defined by molecular weight, logP and numbers of hydrogen bond donors and acceptors with no account being taken of variation in either absorption or permeation for the compounds. Analysis like this can be informative but you need to demonstrate that the chemical space is actually relevant to the phenomena of interest. One way that you can demonstrate that a chemical space is relevant is to build predictive models for the phenomena of interest using only the dimensions of the chemical space as descriptors. Alternatively you might observe meaningful differences between the distributions in the chemical space for compounds that have respectively passed and failed at a particular stage in clinical development.

So that’s all that I’ll be saying about Rof5 and it’s time to take a look at the contributions of the second proposed Nobel Laureate:

The development of LipE by Paul D. Leeson in 2007 introduced a new principle for how physicochemical properties impact potency, selectivity, and safety. [As a reviewer of the manuscript I would have pressed the authors to explicitly state the new principle that their second nominee for the Nobel Prize for Physiology or Medicine had introduced in 2007.]

I'll start by saying that LipE is a simple mathematical formula and I suggest that one shouldn't be confusing simple mathematical formulae with principles when nominating people for Nobel Prizes. There are, however, other errors and these are not the kind of errors that you can afford to make when nominating people for Nobel Prizes. First, the term used in 29 is actually “ligand-lipophilicity efficiency” (LLE) although this appears to have mutated to “lipophilic ligand efficiency” (also LLE) by 2014 (see H2014). The term “LipE” was actually introduced by Pfizer scientists (see R2009) and it is significant that the more recent J2018 article defines LipE in terms of logD rather than logP (doing so means that you can make compounds more efficient simply by increasing extent of ionization and, as a drug design tactic, this is likely to end about as well as things did for the Sixth Army at Stalingrad).

The second (and more serious from the perspective of a Nobel nomination) error is that the metric had already been discussed, although not named, in the literature when 29 was published (I’m guessing that a suggestion that naming a metric merits a Nobel Prize for Physiology or Medicine might cause some members of the Nobel Prize committee to choke on their surströmming).  The L2006 book chapter, published fifteen months before 29, states:

Thus, to achieve compounds with a not too high log P while still retaining potency, the difference between the log potency and the log D can be utilised.

From the A2007 perspective, which was published three months before 29:

Lipophilicity is thought to be a driving force for binding to anti-targets such as the hERG ion channel and cytochrome p450 enzymes and potency can be scaled by lipophilicity by subtracting measured or calculated 1-octanol water partition coefficients from pIC50.

It might be helpful to say something about efficiency metrics since LipE (or LLE if you prefer) is an example of an efficiency metric. The idea behind efficiency metrics is to “normalize” a compound’s activity (typically quantified by potency or affinity) by the value of a risk factor such as lipophilicity or molecular size (for the masochists among you there’s an entire section in 34 on normalization of binding affinity). Ligand efficiency (LE) was introduced in 2004 (see H2004) and is generally regarded as the original efficiency metric although its creators do acknowledge the influence of the K1999 study. I’ve argued at length in 34 (Table 1 and Figure 1 in the article capture the essence of the argument) that LE is physically meaningless because perception of efficiency changes if you use a different concentration to define the standard state (by convention ΔGbinding values correspond to an arbitrary 1 M standard concentration) and there is no way to objectively select any particular value of the standard concentration for calculation of LE. The problem doesn’t go away if you try to define ligand efficiency in terms of logarithmically expressed values of IC50, Ki or Kd instead of ΔGbinding because these quantities still have to be divided by an arbitrary concentration value in order to be expressed as logarithms (see M2011). My view is that LE shouldn't even be described as a metric and I sometimes appropriate a quote ("it's not even wrong") that is usually attributed to Pauli because those who advocate the use of LE in drug design are unable (or unwilling) to say what it measures.

The meaninglessness of LE stems from it being defined by scaling ΔGbinding by the design risk factor (molecular size). In contrast, LipE is defined by offsetting pIC50 by the risk factor (logP) and can be interpreted (see 34) as the energetic cost of moving the ligand from octanol to its target binding site (this interpretation is only valid when the ligand binds in its neutral form and is predominantly neutral in the aqueous phase).  When considering lipophilicity in property-based design it is important to be aware that octanol is an arbitrary choice of solvent for measurement of partition coefficients and that the logP (or logD) calculated for a compound may differ significantly depending on the algorithm used for the calculations. That said, the hydrogen bond donors/acceptors and ionizable groups tend to be relatively conserved within structural series which means that the details of exactly how lipophilicity is quantified are likely to be less critical in lead optimization than for structurally-diverse sets of compounds.
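The contrast between scaling and offsetting can be demonstrated with a toy calculation (the two compounds, their Kd values, heavy atom counts and logP values below are all invented for illustration, and pKd is used in place of ΔGbinding since the RT factor doesn't affect rank order). Changing the standard concentration shifts every pKd by the same constant, so the difference in LipE between two compounds is unchanged, while LE divides that shifted quantity by a compound-dependent heavy atom count and the rank order can flip:

```python
import math

def pkd(kd_molar, std_conc=1.0):
    """pKd expressed relative to a chosen standard concentration (M)."""
    return -math.log10(kd_molar / std_conc)

def ligand_efficiency(kd_molar, n_heavy, std_conc=1.0):
    """LE in pKd units per heavy atom (scaling by the risk factor)."""
    return pkd(kd_molar, std_conc) / n_heavy

def lipe(kd_molar, logp, std_conc=1.0):
    """LipE: pKd offset by the risk factor (logP)."""
    return pkd(kd_molar, std_conc) - logp

# Two invented compounds: (Kd in M, heavy atom count, logP)
a = (1e-6, 20, 2.0)   # 1 uM binder, small molecule
b = (1e-8, 40, 3.0)   # 10 nM binder, larger molecule

for c0 in (1.0, 1e-5):  # standard concentrations of 1 M and 10 uM
    le_a = ligand_efficiency(a[0], a[1], c0)
    le_b = ligand_efficiency(b[0], b[1], c0)
    lipe_gap = lipe(b[0], b[2], c0) - lipe(a[0], a[2], c0)
    print(f"C0 = {c0:g} M: LE ranks A over B? {le_a > le_b}; "
          f"LipE difference (B - A) = {lipe_gap:.2f}")
```

With a 1 M standard concentration compound A looks more "ligand efficient" than compound B; with a 10 μM standard concentration the ranking reverses, while the LipE difference between the two compounds is identical in both cases.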

When we use LipE we’re actually assuming that logP (or logD) is predictive of properties such as aqueous solubility, affinity for anti-targets and metabolic lability. That is why it’s not accurate to state that the introduction of LipE showed how “physicochemical properties impact potency, selectivity, and safety”. In some published studies the focus is less on the LipE metric and more on what might be called the "lipophilic efficiency concept" (aim for top left corner of a plot of potency against lipophilicity). It is common to add reference lines of constant LipE to plots of potency against lipophilicity in this type of analysis and if you're doing this you really should be citing R2009 rather than 29.

I'll finish the commentary on LipE (or LLE if you prefer) with this statement made in the Editorial:

Emerging from an analysis of approved drugs, this rubric predicts a compound is more likely to be clinically developable when LipE > 5. [I don’t know what the authors of the Editorial mean by “rubric” (I'm not even sure that they do) but as a reviewer of the manuscript I would have pressed them to justify their claim. Specifically I would have been looking for a literature reference (for me, the choice of the word “emerging” does rather conjure up an image of hot gases and stoned priestesses at Delphi) and a coherent explanation for why a value of 5 yields a better rubric than values of 4 or 6.]

That’s all that I’ll be saying about LipE (or LLE if you prefer) and it’s time to take a look at the contributions of the third nominee for the Nobel Prize in Physiology or Medicine:

Finally, the development of Fsp3 by Frank Lovering in 2009 introduced a new principle for how molecular shape affects pharmaceutical properties and developability. [As a reviewer of the manuscript I would have pressed the authors to explicitly state the new principle that their third nominee for the Nobel Prize for Physiology or Medicine had introduced in 2009. My view is that Fsp3 is a thoroughly unconvincing descriptor of molecular shape and I invite readers to consider the suggestion that cyclohexane (Fsp3 = 1) would have a better shape match with benzene (Fsp3 = 0) than with either methane (Fsp3 = 1) or adamantane (Fsp3 = 1).]

[04-Aug-2024 update: The Fsp3 descriptor had actually been used as i_ali in the YG2003 study (Prediction of Aqueous Solubility of Organic Compounds by Topological Descriptors) six years before the publication of 35:

The aliphatic indicator of a molecule (i_ali) is equal to the number of sp3 carbons divided by the total number of carbon atoms in the molecule.

The YG2003 study discussed prediction of aqueous solubility using i_ali (renamed as Fsp3 in 35) in conjunction with other topological descriptors. In contrast with the claims made in 35 for Fsp3, the YG2003 study made no suggestion that i_ali was a highly effective predictor of aqueous solubility when used by itself.]
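As the definition quoted above makes clear, the descriptor is just a ratio of carbon counts. The sketch below hard-codes atom counts (counted by hand) for a few molecules rather than deriving them from structures, and illustrates why identical Fsp3 values can correspond to very different shapes:

```python
def fsp3(n_sp3_carbons, n_carbons):
    """Fraction of carbons that are sp3 (i_ali in YG2003, Fsp3 in 35)."""
    return n_sp3_carbons / n_carbons

# (sp3 carbons, total carbons) counted by hand for each molecule
molecules = {
    "methane":     (1, 1),
    "benzene":     (0, 6),
    "cyclohexane": (6, 6),
    "adamantane":  (10, 10),
}
for name, counts in molecules.items():
    print(f"{name:12s} Fsp3 = {fsp3(*counts):.2f}")
```

Methane, cyclohexane and adamantane all score exactly 1 despite their very different shapes, which is the point made in the bracketed comment above about Fsp3 being an unconvincing descriptor of molecular shape.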

Before discussing the contributions of the third nominee for the Nobel Prize for Physiology or Medicine I should stress that I certainly consider gratuitous use of aromatic rings to be a very bad thing in drug design (it was the data analysis in 35 that was criticized in KM2013 but not the eminently sensible suggestion that drug designers should look beyond what the authors referred to as ‘Flatland’). Having sp3 carbon atoms in a scaffold provides drug designers with a wider range of options for placement of substituents than would be the case for a fully aromatic scaffold and we stated in KM2013 that:   

One limitation of aromatic rings as components of drug molecules is that some regions above and below the plane defined by the atomic nuclear positions are not directly accessible to substituents. Molecular recognition considerations suggest a focus on achieving axial substitution in saturated rings with minimal steric footprint, for example by exploiting the anomeric effect or by substituting N-acylated cyclic amines at C2. 

My view is that deleterious effects of aromatic rings on aqueous solubility would be more plausibly explained by molecular interactions stabilizing the solid state than in terms of molecular shape (this point is discussed in more detail in HBD3). I also see saturated ring systems such as bicyclo[1.1.1]pentane and cubane as potentially more resistant to metabolism than benzene. 

There’s one point that I need to make before discussing 35 from the data analysis perspective which is that molecular structures with basic nitrogen atoms tend to have higher Fsp3 values than molecular structures that lack basic nitrogen atoms (see L2013). This means that you can’t tell whether the benefits of higher Fsp3 values are actually caused by the higher Fsp3 values or by the presence of basic nitrogen atoms.

The Editorial states:

Stemming from an analysis of discovery compounds, investigational drugs, and approved drugs, Fsp3 predicts a discovery compound is more likely to become a drug when Fsp3 > 0.40. 

It’s not clear (at least to me) where the figure of 0.40 comes from and I would argue that compound X (IC50 against therapeutic target = 50 μM; Fsp3 = 0.80) would actually be less likely to become a drug than compound Y (IC50 against therapeutic target = 10 nM; Fsp3 = 0.20). I’m assuming that what the Editorial refers to as “analysis of discovery compounds, investigational drugs, and approved drugs” is what is shown by Figure 3 in 35. Presenting data in this manner hides the variation in Fsp3 for the compounds at each stage of development and makes the trends look much stronger than they actually are (this is verboten according to current J Med Chem author guidelines). I would challenge the suggestion that what is shown in Figure 3 in 35 can be used to calculate the probability that an arbitrary compound will become a drug (my view is that it’s not feasible to even define the probability that a compound will become a drug in a meaningful manner). Analyses of success in clinical development are generally more convincing when comparisons are made between compounds that pass or fail in individual phases of clinical development than between compounds in different phases of clinical development.

The Editorial continues:        

This observation was ascribed to increased Fsp3 leading to increased aqueous solubility, a critical physiochemical property for successful drug discovery.

I’m assuming that what the Editorial refers to as “increased Fsp3 leading to increased aqueous solubility” is the trend shown by Figure 5 of 35 (this featured prominently in the KM2013 correlation inflation article) which claims to show the relationship between Fsp3 and log S (aqueous solubility expressed as a logarithm). This claim is not accurate because the log S values have been binned and the relationship is actually between centre point of bin and mean log S value for bin. The authors of 35 used public domain aqueous solubility data for their analysis and we showed (KM2013; see Figure 5) that the Pearson correlation coefficient for the relationship between log S and Fsp3 is only 0.25 (the corresponding value for the binned data is 0.97). I consider the suggestion that such a weak correlation could have any relevance whatsoever to the likelihood of success in clinical trials to be wild and uninformed conjecture.
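The inflating effect of binning on a correlation coefficient is easy to reproduce with synthetic data (the numbers below are simulated and are not the solubility data analysed in 35 or KM2013): a weak raw correlation between x and y turns into a near-perfect correlation between bin centres and bin means.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient for two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

random.seed(42)
# Weakly correlated synthetic data: y = x plus a lot of noise
xs = [random.uniform(0.0, 1.0) for _ in range(2000)]
ys = [x + random.gauss(0.0, 1.0) for x in xs]
print(f"raw Pearson r    = {pearson(xs, ys):.2f}")

# Bin on x, then correlate bin centres with the mean y of each bin
n_bins = 5
centres, means = [], []
for i in range(n_bins):
    lo, hi = i / n_bins, (i + 1) / n_bins
    in_bin = [y for x, y in zip(xs, ys) if lo <= x < hi]
    centres.append((lo + hi) / 2)
    means.append(sum(in_bin) / len(in_bin))
print(f"binned Pearson r = {pearson(centres, means):.2f}")
```

Averaging within bins throws away the within-bin spread, so the binned correlation says almost nothing about how well x predicts y for an individual compound, which is exactly the objection to presenting Figure 5 of 35 as evidence of a strong Fsp3/log S relationship.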

I'll finish my commentary on Fsp3 by reproducing this claim made in the Editorial:

Much like the Rof5 and LipE, Fsp3 has proven to be enduringly useful for the design of compounds with improved chances of clinical success. (37) [My view is there is insufficient evidence to justify this claim and I'm perplexed by the citation of 37. In any case, members of the Nobel committee are likely to focus more on whether or not Fsp3 is usefully predictive than on the endurance of this molecular descriptor.]  

It’s now time to summarise what has been a long and at times pedantic blog post, and I thank all readers who’ve stayed with me. I don’t consider any of the three studies (22 | 29 | 35) that form the basis of the Nobel Prize nomination to have reported significant scientific discoveries and I would also challenge the claim made in the Editorial that these studies introduced new principles. I’m aware that 22 is heavily cited and I certainly agree that it is common to see values of LipE and Fsp3 quoted in the drug discovery literature. Nevertheless, I would argue that the Editorial failed to provide even a single convincing example of the Rof5, LipE or Fsp3 making a critical contribution to the discovery of a marketed drug (this should be quite sufficient to rule out the award of a share in the Nobel Prize for Physiology or Medicine to any of these nominees). Furthermore, the Editorial doesn’t provide any convincing evidence that the Rof5, LipE or Fsp3 are usefully predictive in drug discovery projects.

Aside from the failure of the Editorial to demonstrate significant impact for the Rof5, LipE and Fsp3, I do have some scientific concerns about this Nobel Prize nomination. First, the Rof5 is not actually supported by data. Second, LipE had already been discussed, although not named, in the drug discovery literature when 29 was published. Third, Fsp3 had been used previously (as i_ali) for aqueous solubility prediction and the data analysis in 35 would fail to comply with current J Med Chem author guidelines.

Monday 20 May 2024

A time and place for Nature in drug discovery?

I’ll be reviewing Y2022 (The Time and Place for Nature in Drug Discovery) in this post and stating my position on natural products in modern drug discovery is a good place to start. I certainly see value in screening natural products and natural product-like compounds (especially in phenotypic assays) and there is currently a great deal of interest in chemical probes (I’ll point you toward an article on the Target 2035 initiative and a link to the Chemical Probes Portal). In general, a natural product or natural product-like active identified by screening would either need to exhibit novel phenotypic effects or be significantly more potent than other known actives for me to be enthusiastic about following it up. I would certainly consider screening fragments that are only present in natural product structures although these would still need to comply with the criteria (typically defined in terms of properties such as molecular size, molecular complexity and lipophilicity) used to select fragments. I see significant benefits coming from the increased use of biocatalysis, both in drug discovery and for manufacturing drugs, but I don’t see these benefits as being restricted to synthesis of natural products or natural product-like compounds.

This will be a very long post (for which I make no apology) and it's a good point to say something about how the review is presented. I've used the section headings (in bold text) from Y2022 for my commentary and quoted text has been indented (my comments on the quoted text are enclosed in square brackets and italicized in red). I'd like to raise four general points before starting my review:

  1. Proprietary data cannot accurately be described as “facts” or “evidence” and it’s not valid to claim that you’ve proven or demonstrated something on the basis of analysis of proprietary data.  
  2. If continuous data such as oral bioavailability measurements have been made categorical (e.g., high | medium | low) prior to analysis then it’s generally a safe assumption that any trends "revealed" by the analysis are weak.  
  3. If basing claims on analysis of locations or distributions within a particular chemical space it is necessary to demonstrate the chemical space is actually relevant to the claims being made. One way to do this is to build usefully predictive models of relevant quantities such as aqueous solubility or permeability using only the dimensions of the chemical space as descriptors.  
  4. There are generally many ways to partition a region of chemical space into subregions with different average values for a measured quantity. Although the  boundaries resulting from these analyses typically appear to be well-defined (for example, as a line or curve in a 2-dimensional chemical space) it is a serious error to automatically interpret such boundaries as meaningful from a physicochemical perspective.    

I have a number of concerns about the Y2022 article and I’ll focus on the more serious of these in this post. I’ll also be commenting on the Rule of 5 (Ro5; see L1997), logP/logD differences, and the drug discovery “sweet spot” reported in the HK2012 article.  My view is that a number of the assertions and recommendations made by the authors of Y2022 are not supported by the analyses or the data that they’ve presented. Specifically, the authors present results of analyses that had been performed using proprietary and undocumented models and, in my view, they have grossly over-interpreted the predictions made using the models.  At times, the authors appear to be treating natural products as if these occupy a distinct and contiguous region of chemical space (this is a pitfall into which drug-likeness advocates also frequently stumble).  The authors of Y2022 discuss physicochemical properties at considerable length without making any convincing connection between this discussion and natural products. Reading the Y2022 article, I did detect a subliminal message that natural products might be infused with vital force and wouldn’t have been surprised to see Gwyneth Paltrow as a co-author.

I’ll make some general observations before examining Y2022 in detail. If you’re going to base decisions on trends in data then you need to know how strong the trends are because this tells you how much weight to give to the trends when making your decisions. In what I’ll call the ‘compound quality’ field you’ll often encounter data presentations that make it extremely difficult to see how strong (or weak) the trends in the data actually are (see KM2013: Inflation of correlation in the pursuit of drug-likeness). Since Ro5 was introduced in 1997 (see L1997) there has been a free flow of advice from self-appointed compound quality gurus as to how compounds can be made better, more developable and more beautiful (introduction of the term “Ro5 envy” in KM2013 appeared to cause some to spit feathers). This advice frequently comes in the form of dire warnings that exceeding a threshold value of a property, such as molecular weight or predicted octanol/water partition coefficient, will increase the probability of something bad happening. It’s actually very difficult to set thresholds like these objectively and you have to consider the possibility that some of these statements of probability are merely expressions of belief (to some “there is a high probability that God exists” will sound rather more convincing than “I believe in God”).

The graphical abstract is a good place to start my review of Y2022. I don’t know whether biotransformations exist that would convert the Core Scaffold into compounds that would match the Bios Collection generalized structure but a 1,3-diene in conjugation with a tertiary nitrogen is not the sort of substructure that I would want to see in a screening active that I had been charged with optimizing.  

Abstract

The authors of Y2022 state:  

The declining natural product-likeness of licensed drugs and the consequent physicochemical implications of this trend in the context of current practices are noted. [The authors do not make a convincing connection between natural product-likeness and physicochemical properties.]  To arrest these trends, the logic of seeking new bioactive agents with enhanced natural mimicry is considered; notably that molecules constructed by proteins (enzymes) are more likely to interact with other proteins (e.g., targets and transporters), a notion validated by natural products. [I consider this claim to be extravagant and it does need to be supported by evidence. The authors’ use of “validated” reminded me of the extravagant claim made in a Future Medicinal Chemistry editorial that “ligand efficiency validated fragment-based design”. Taking the statement literally, the authors appear to be suggesting that a compound would be more likely to interact with proteins if it had been isolated from natural sources than if it had been synthesized in a laboratory (I was reminded of the "water memory" explanation for why homeopathy works). If “molecules constructed by proteins” really are more likely to interact with other proteins then they’re also more likely to interact with anti-targets like hERG and CYPs. I’m guessing that the response of medicinal chemistry teams tackling CNS targets to suggestions that they should make their compounds more like natural products so as to increase the likelihood of recognition by transporters might be to ask which natural products those offering the advice had been smoking.]

Introduction

The authors show time-dependence for the values of a number of parameters calculated for drugs in Figure 1. I see analyses like these as exercises in philately and, when I first encountered examples about two decades ago, I formed a view that some senior medicinal chemists had a bit too much time on their hands. The observation of significant time-dependency for a parameter calculated for drugs can mean one of three things. First, the parameter is irrelevant to drug discovery (however, the absence of a time-dependence shouldn't be taken as evidence that the parameter is relevant to drug discovery). Second, the old ways were best and the medicinal chemists of today have lost their way (I’m guessing this might be Jacob Rees Mogg’s interpretation if he were a medicinal chemist). Third, the old ways no longer work so well and the medicinal chemists of today have learned new ways.

I have a number of concerns about what is shown in Figure 1 (quite aside from these concerns I would question why 1b or 1c were even included in the study). The data values that have been plotted are actually mean values and, as we observed in KM2013, the presentation of mean (or median) values without showing measures of the spread in the data, such as standard deviation or inter-quartile range, makes trends look stronger than they actually are (others use the term “voodoo correlations”).  This way of presenting data is specifically verboten by J Med Chem and the Author Guidelines (viewed 18-May-2024) for that journal state:

If average values are reported from computational analysis, their variance must be documented. This can be accomplished by providing the number of times calculations have been repeated, mean values, and standard deviations (or standard errors). Alternatively, median values and percentile ranges can be provided. Data might also be summarized in scatter plots or box plots.

However, the hidden variation in the response variables is not the only issue that I have with Figure 1. Let’s take a look at Figure 1a which shows “a temporal comparison of natural product likeness of approved drugs assessed by the Natural Product Scout algorithm (12) versus the year of the first disclosure of the drug” although the caption for Figure 1a is “Natural product class probability. (8)”. I think that the authors do need to explain exactly what they mean by natural product class probability because the true probability that a compound is a natural product is either 1 (it’s a natural product) or 0 (it’s not a natural product). Put another way there are differences between natural products and Prof. Schrödinger’s unfortunate feline companion. The measure of lipophilicity shown in Figure 1c is XLogP3 although no justification is given for the selection of this particular method for lipophilicity prediction nor is any reference provided.

Before continuing with my review of Y2022 I also need to examine Ro5 and discuss the difference between logP and logD (the reasons for these digressions will hopefully become clear later). Ro5 was based on physicochemical property distributions for compounds that had been taken into phase 2 of clinical development before 1997 (the year that L1997 was published). My view is that Ro5 certainly raised awareness of the problems associated with excessive lipophilicity and molecular size (A Good Thing) but I’ve never considered Ro5 to be useful in design. Although Ro5 is accepted by many (most?) drug discovery scientists as an article of faith, some are prepared to ask awkward questions and I’ll mention the S2019 study. Let’s take a look at how Ro5 was specified in the L1997 article (the graphic is slide #17 from a presentation that I gave late last year):


Ro5 is stated in terms of likelihood of poor absorption or permeation although no measured oral absorption or permeability data are given in the L1997 study and Ro5 should therefore be regarded as a statement of belief. I realise that to make such an assertion runs the risk of an appointment with the auto-da-fé and I stress that had Ro5 been stated in terms of physicochemical and molecular property distributions I would not have made the assertion.

Medieval cartographers annotated the unknown regions of their maps with “here be dragons” and Ro5’s dragons are poor absorption and poor permeation. However, there's another issue which I touched on in HBD3:

It is significant that attempts to build global models for permeability and solubility, using only the dimensions of the chemical space in which the Ro5 is specified as descriptors, do not appear to have been successful.

What I was getting at in HBD3 is that the chemical space in which Ro5 is specified was not demonstrated to be relevant to permeability or solubility (this relates to the third of the four points that I raised at the start of the post). It must be stressed that I'm definitely not denying that relationships exist between descriptors, such as logP, used to specify Ro5 and properties such as aqueous solubility and permeability that are more directly relevant to getting drugs to where they need to be. It’s just that these relationships are weak (see TY2020) and, while we don’t know exactly how weak the relationships are, we do know that they are weak because continuous data have been binned to display them (see also KM2013 and specifically the comments on HY2010). I would generally anticipate that these relationships will be stronger within structural series but in these cases you’ll generally observe different relationships for different structural series. In practical terms this means that a logP of 5 might be manageable in one structural series while in another structural series compounds with logP greater than 3.5 prove to be inadequately soluble. As I advised in NoLE:

Drug designers should not automatically assume that conclusions drawn from analysis of large, structurally-diverse data sets are necessarily relevant to the specific drug design projects on which they are working.

I also need to discuss the distinction between logP and logD since this is a source of confusion for medicinal chemists and compound quality 'experts' alike. Here’s a graphic (it’s slide #18) from the presentation that I did at SancaMedChem in 2019 (if the piranhas did venture into the non-polar phase they'd probably end up swimming backstroke):


The partition coefficient (P) is simply the ratio of the concentration of the neutral form of the compound in the organic phase (usually octanol) to the concentration of the compound in water when both phases are in equilibrium. The distribution coefficient (D) is defined analogously as the ratio of the sum of concentrations of all forms of the compound in the organic phase to the sum of concentrations of all forms of the compound in water. Values of P and D are usually quoted as their logarithms logP and logD. When interpreting logD values it is commonly assumed that only neutral forms of compounds partition into organic phases and, if we make this assumption, the relationship between logD and logP is given by Eqn 1 (see B2017):

When we perform experiments to quantify lipophilicity it is actually logD that is measured. Values of logP and logD are identical when ionization can be neglected and logP values for ionizable compounds can be obtained by examination of measured logD-pH profiles although this is rarely done. It’s usually a safe assumption that logP values used by drug discovery scientists (and quoted in medicinal chemistry publications) have been predicted and these values vary with the method used for prediction of logP. For example, L1997 states that the upper logP limit for Ro5 is 5 when logP is calculated using the ClogP method (see L1993) but 4.15 when logP is calculated using the method of Moriguchi et al. (see M1992). Values of logD that you encounter in the literature may have been calculated or measured (you might need to dig around to see if you’re dealing with real data) and it’s also important to remember that logD depends on pH. I would argue that logD is less appropriate than logP for defining compound quality metrics because excessive lipophilicity can be countered simply by increasing the extent to which compounds are ionized (I hope you can see why that would be A Bad Thing). Another way to think about this is to consider an amine with a pKa value of 8 bound to hERG at a pH of 7. Now suppose that you can change the pKa of the amine to 11 without changing anything else in the molecular structure. What effects would you expect this pKa change to have on affinity, on logD and on logP?
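For readers who want to put numbers on the amine thought experiment, here's a small Python sketch of the standard relationship between logD and logP for a monoprotic base under the usual assumption that only the neutral form partitions into the organic phase (see B2017). The logP value of 3 is purely illustrative.

```python
import math

def logd_base(logp, pka, ph):
    """logD for a monoprotic base, assuming only the neutral form partitions
    into the organic phase: logD = logP - log10(1 + 10**(pKa - pH))."""
    return logp - math.log10(1.0 + 10.0 ** (pka - ph))

# The thought experiment: an amine (illustrative logP of 3) whose pKa is
# raised from 8 to 11 with no other change to the molecular structure.
ph = 7.0
for pka in (8.0, 11.0):
    print(f"pKa {pka}: logD = {logd_base(3.0, pka, ph):.2f}")
# logP is untouched by the pKa shift, but logD drops by about 3 log units
# because the amine is almost fully ionized at pH 7 when its pKa is 11.
```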

I’ll now get back to reviewing Y2022 and let’s take a look at Figure 2 which shows an adapted version of the "drug discovery sweet spot” proposed in the HK2012 study. As with Figure 1b and 1c, I would question why Figure 2 was included in the Y2022 study since the connection with natural products is tenuous. In my view the authors of the HK2012 study made a number of serious errors in their definition of the “sweet spot” and these errors have been reproduced in the Y2022 study. The authors of HK2012 claimed to have identified a “drug discovery sweet spot” in a chemical space defined by “Log P” and “Molecular mass” but they didn’t actually demonstrate that this chemical space is relevant to drug discovery (one way to demonstrate relevance is to build convincing global models for prediction of properties like permeability and aqueous solubility using only the dimensions of the chemical space as descriptors).

If claiming to have identified a drug discovery “sweet spot” it’s important that each dimension of the chemical space in which the “sweet spot” is defined corresponds to a single entity. While “Molecular mass” is unambiguous the term “Log P” does not refer to the same entity for each of the data sets from which the “sweet spot” has been derived. As noted previously ClogP (see L1993) was used to specify Ro5 while the Gleeson upper Log P limit (see G2008) and the “μM potency Log P” (see G2011) were specified respectively by values of clogP (calculated logP from ACD) and AlogP (no reference provided). In contrast the Pfizer Golden Triangle (see J2009) is specified using elogD (proprietary logD prediction method for which details were not provided). The Waring low and high logP/logD values stated in W2010 are at least partly based on analysis of AZlogD7.4 values (proprietary logD prediction method; details not provided) reported in the WJ2007 and W2009 studies. The W2010 study states that “the optimal range of lipophilicity lies between ~ 1 and 3” but these are not the values that are depicted in Figure 2 (or indeed in the original HK2012 study). The Gleeson upper limits for Log P and Molecular Mass stated in G2008 reflect the arbitrary schemes used to bin the data and should not be regarded as objectively-determined limits for these quantities. The authors of Y2022 have superimposed ellipses for "SHMs", "Antibiotic Space?" and "bRo5 / AbbVie MPS space for higher MW" on the HK2012 "sweet spot" in the creation of Figure 2 although it is not clear how these ellipses were constructed.

The Physicochemical Characteristics of Drugs

The authors assert:

A principle advocated by Hansch that drug molecules should be made as hydrophilic as possible without loss of efficacy (47) is commonly expressed and utilized as Lipophilic Ligand Efficiency (LLE). (48) [If actually using this principle advocated by Hansch you would optimize leads by varying hydrophilicity and observing efficacy. While LLE is one way to express Hansch’s principle it is by no means the only way and (pIC50 – 0.5 × logP) would be equally acceptable as a lipophilic ligand efficiency metric from the perspective of Hansch’s principle.] This metric, widely accepted and exploited in drug discovery as a key metric in optimization, is expressed on a log scale as activity (e.g., −log10[XC50]) [The logarithm function is not defined for dimensioned quantities such as XC50 (see M2011) and, while it may appear to be nitpicking to point it out, this is the source of the invalidity of the ligand efficiency metric as was discussed at length in NoLE.] minus a lipophilicity term (typically the Partition coefficient or log10 P or sometimes log D7.4). (49) [Although it is common to see LLE values quoted in the drug discovery literature it’s much less clear how (or even whether) the metric was actually used to make project decisions. In many studies, however, the focus is on plots of pIC50 against logP (or logD) rather than values of the metric itself. In lead optimization, medicinal chemists typically need to balance activity against properties such as permeability, aqueous solubility, metabolic stability and off-target activity. In these situations, experienced medicinal chemists typically give much more weight to structure-activity relationships (SARs) and structure-property relationships (SPRs) that they've observed within the structural series that they're optimising than to crude metrics of questionable relevance and predictivity. 
It is noteworthy that the authors of ref 49 use logD rather than logP to define LLE (which they call LiPE) and if you do this then you can make compounds more efficient simply by increasing the extent to which they are ionized.] The impact of lipophilicity on efficacy needs to be considered in the context that reducing lipophilicity (equating to increasing hydrophilicity) will generally increase the solubility, reduce the metabolism, and reduce the promiscuity of a given compound in a series. (50) [The relationships between these properties and lipophilicity shown in ref 50 are for structurally diverse data sets rather than for individual series. I consider the activity criterion (pIC50 > 5) used to quantify promiscuity in ref 50 to be at least an order of magnitude too permissive to be pharmaceutically relevant.]
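The objection to defining LLE with logD can be made numerically. Assuming the usual relationship between logD and logP for a monoprotic base (only the neutral form partitioning; see B2017) and purely illustrative values of pIC50 and logP, this Python sketch shows that raising the pKa inflates the logD-based metric while leaving the logP-based metric untouched.

```python
import math

def logd_base(logp, pka, ph=7.4):
    # Assumes only the neutral form partitions into the organic phase.
    return logp - math.log10(1.0 + 10.0 ** (pka - ph))

pic50, logp = 7.0, 3.0   # illustrative values, not taken from any article
for pka in (6.0, 10.0):
    logd = logd_base(logp, pka)
    print(f"pKa {pka}: LLE(logP) = {pic50 - logp:.1f}, "
          f"LLE(logD) = {pic50 - logd:.1f}")
# Raising the pKa (more ionization at pH 7.4) leaves the logP-based metric
# unchanged but makes the logD-based metric look substantially better, even
# though nothing about the neutral form's lipophilicity has improved.
```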

Let’s take a look at Figure 3 in which values of “Calc Chrom Log D7.4” are plotted against “CMR”. This is what the authors say about Figure 3 in the text of Y2022:

The distribution of marketed oral drugs in terms of their lipophilicity and size, shows a remarkably similar distribution to the set of compounds designed by Kell as a representative set of natural products to investigate carrier mechanisms (Figure 3). (64) [To state “shows a remarkably similar distribution” is arm-waving given that there are methods for assessing the similarity of two distributions in an objective manner.]
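For what it's worth, one objective option is the two-sample Kolmogorov-Smirnov statistic (the maximum vertical gap between the two empirical cumulative distribution functions). Here's a minimal numpy sketch applied to synthetic stand-in values rather than the Y2022 data:

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the empirical CDFs of the two samples."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(1)
drugs = rng.normal(2.0, 1.5, 300)   # stand-in lipophilicity values for drugs
nps = rng.normal(2.3, 1.8, 300)     # stand-in values for natural products
print(f"KS statistic: {ks_statistic(drugs, nps):.2f}")
```

A small statistic (together with the p-value you'd get from scipy.stats.ks_2samp in practice) would quantify "remarkably similar"; eyeballing two overlaid scatter plots does not.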

As is the case for Figure 1a, what is written in the text about Figure 3 differs significantly from the caption for this figure:

Figure 3. Natural products are found across most size lipophilicity combinations, as exemplified in a representative set designed and compiled by O’Hagan and Kell (64) superimposed on the Chrom log D7.4 vs cmr training set of compounds with >30% bioavailability. (51) [It is unclear why this training set was restricted to compounds with >30% bioavailability.  The LDF is shown in this figure with “Limits of confidence” but the level of confidence to which these limits correspond is not given.]

The first criticism that I’ll make is that the authors of Y2022 have not actually demonstrated the relevance of chemical space specified by the axes of Figure 3 (this is the essence of the third of the four points that I raised at the start of the post and the same criticism can be made of Figure 4 and Figure 5). The authors note, with some arm-waving, that cmr “largely correlates with MW” which does rather beg the question of why they consider this particular measure of molecular size to be superior to MW for this type of analysis. The authors claim that “the GSK model based on log D7.4 vs calculated molar refraction” (it is actually molar refractivity as opposed to molar refraction that was calculated) is a useful guide to predict oral exposure. I consider this claim to be extravagant because one would need to have access to the proprietary model for calculation of Chrom Log D7.4 in order to use the model. The proprietary nature of the GSK model means that predictions made using this model cannot credibly be presented as “evidence”.

Details of the models for calculating Chrom Log D7.4 and for prediction of oral exposure are sketchy and I regard each of these proprietary models as undocumented. A linear discriminant function (LDF) model was reportedly used for prediction of oral exposure but it is unclear how the model was trained (or if it was even validated). An LDF is a classification model and it is not clear how the classes were defined for prediction of oral exposure. I’m assuming that the oral absorption classes used in the GSK oral exposure model have been defined by categorization of continuous data (I’m happy to be corrected on this point but, given the sketchiness of details, I can be forgiven for speculation) and setting thresholds like these is difficult to achieve in an objective manner. If this was indeed the case I'd assume that the threshold value used to categorize the continuous data was arbitrary (you’ll get a different LDF model if you use a different threshold to define the classes). My view is that an LDF is an inappropriate way to model this type of data because the categorization of the data discards a huge amount of information.
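To illustrate why the threshold matters, here's a Python sketch in which a Fisher linear discriminant is fitted to the same synthetic continuous response categorized at two different (arbitrary) cut-offs. The descriptors and response are hypothetical stand-ins and nothing here should be taken as a reconstruction of the GSK model.

```python
import numpy as np

def lda_boundary(X, y):
    """Fisher LDF for two classes: unit direction w and cutoff c such that
    the decision boundary is the set of points x with w @ x = c."""
    X0, X1 = X[y == 0], X[y == 1]
    pooled = ((len(X0) - 1) * np.cov(X0.T) +
              (len(X1) - 1) * np.cov(X1.T)) / (len(X) - 2)
    w = np.linalg.solve(pooled, X1.mean(axis=0) - X0.mean(axis=0))
    w /= np.linalg.norm(w)
    c = w @ (X0.mean(axis=0) + X1.mean(axis=0)) / 2.0
    return w, float(c)

rng = np.random.default_rng(2)
n = 500
X = rng.normal(size=(n, 2))  # stand-in descriptors (e.g. lipophilicity, size)
response = X @ np.array([0.8, -0.4]) + rng.normal(0.0, 1.0, n)

# Categorize the SAME continuous response at two different arbitrary cut-offs:
for thresh in (-0.5, 0.5):
    w, c = lda_boundary(X, (response > thresh).astype(int))
    print(f"cut at {thresh:+.1f}: w = {np.round(w, 2)}, boundary at w@x = {c:.2f}")
```

The fitted boundary shifts with the cut-off even though the underlying data are identical, which is precisely why an apparently well-defined line in a 2-dimensional chemical space should not automatically be interpreted as physicochemically meaningful.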

Here's the caption for Figure 4:

Figure 4. Proposed regions of size/lipophilicity space for an oral drug set, (51) using the effectual combination of Chrom Log D7.4 vs calculated molar refraction (cmr) as a description of chemical space. [It’s actually molar refractivity as opposed to molar refraction that was calculated. It is unclear what the authors mean by "bRo5 principles".] The highlighted regions suggest likely absorption mechanisms, based on ref (65) with compounds colored by binned NPScout probability scores. [The authors of Y2022 appear to be using a proprietary and undocumented LDF model of unknown predictivity to infer absorption mechanisms (this is what I was getting at in the fourth of the four points that I raised at the start of the post). The depiction of data shown in Figure 4 would be much more informative had compounds known (as opposed to believed) to be orally absorbed by one of these mechanisms been plotted in this chemical space.] Below the LDF line, the mean NPScout score is 0.45, (median 0.33) and above it (indicative of likely oral exposure) the mean is 0.31 and median 0.17 (p < 0.01) [It is unclear what (p < 0.01) refers to.]

Here's the caption for Figure 5: 

Figure 5. Illustration of antibiotic drug space, expressed as Calculated Chrom Log D7.4 vs cmr adapted from data in ref (65) colored by antibiotics (circles) and TB drugs (diamonds) which are sized by NP class probabilities and colored by prediction of likelihood of oral exposure (either side of the diagonal “linear discriminant function line” so to be oral, transporters a likely mechanism for the red colored compounds, which mostly have a high NPScout score). [As is the case for Figure 4, the authors of Y2022 appear to be using a proprietary and undocumented LDF model of unknown predictivity to infer absorption mechanisms. Stating that "mostly have a high NPScout score" is arm-waving.]  Vertical (cmr < 8) and horizontal lines (Chrom Log D7.4 < 2.5) together represent likely boundaries for paracellular absorption. [The basis (measured data or belief) for this assertion is unclear. The depiction of data shown in Figure 5 would have been more convincing had compounds known to be and known not to be absorbed by the paracellular route been plotted in this chemical space. While the problems of achieving good oral absorption for antibiotics should not be underestimated, I see getting compounds into cells as the bigger issue and in some cases the transporters cause active efflux (see R2021). The depiction of data shown in Figure 5 would have been much more informative had compounds known (as opposed to believed) to exhibit active influx and active efflux been plotted in this chemical space. Although Figure 5 is presented as a description of antibiotic drug space, the study (ref 65) on which Figure 5 is based is actually focused on antitubercular drug space (one of the challenges to discovery of antitubercular drugs is that Mycobacterium tuberculosis is an intracellular pathogen; see WL2012). One article that I recommend to all drug discovery scientists, especially those working on infectious diseases, is the SM2019 review on intracellular drug concentration.]

The authors suggest:

A logical extension of this hypothesis would be to consider recognition processes with natural molecules, which are likely to have discrete interactions with carrier proteins and therapeutic targets. [The authors do need to articulate what they mean by "discrete interactions" and why "natural molecules" are likely to have "discrete interactions" with carrier proteins and therapeutic targets.] Small molecule drugs are noted to be relatively promiscuous, so making interactions with several proteins is a likely event. (76) [This assertion is not supported by ref 76 which is actually a study of nuisance compounds, PAINS filters, and dark chemical matter in a proprietary compound collection. Promiscuity of a compound is typically defined by a count of the number of targets against which activity exceeds a specific threshold and promiscuity generally increases with the permissiveness of the activity threshold (it’s therefore meaningless to describe a compound as “promiscuous” without also stating the activity threshold). The activity threshold for the analysis reported in ref 76 is ≥ 50% inhibition at a concentration of 10 µM which is appropriate if you’re worried about assay interference but, in my view, is at least an order of magnitude too permissive if considering the possibility of off-target activity for a drug in vivo.]  It similarly is logical to consider that a molecule made by a recognition process in a catalytic enzyme may also interact with another protein in a similar manner. (77) [This is not quite as logical as the authors would have us believe since enzymes catalyze reactions by stabilizing transition states. A high binding affinity of an enzyme for its reaction product would generally be expected to result in inhibition of the enzyme by the reaction product.]
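The dependence of promiscuity on the activity threshold is easy to demonstrate numerically. The panel below is hypothetical and the numbers are illustrative rather than being taken from ref 76:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical panel: pIC50 values for one compound against 100 off-targets.
pic50 = rng.normal(4.0, 1.0, 100)

for threshold in (5.0, 6.0):   # 10 uM and 1 uM activity cutoffs respectively
    hits = int(np.sum(pic50 >= threshold))
    print(f"targets with pIC50 >= {threshold}: {hits}")
# The same compound looks far more "promiscuous" under the permissive 10 uM
# cutoff than under the 1 uM cutoff, which is why a promiscuity claim is
# meaningless without the activity threshold attached to it.
```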

Natural Product Fragments in Fragment-Based Drug Discovery

The authors note:

Fragment-based drug discovery (FBDD) can be employed to rapidly explore large areas of chemical space for starting points of molecular design. (91 | 92 | 93) However, most FBDD libraries are composed of privileged substructures of known synthetic drugs and drug candidates and populate already well-explored areas of chemical space, (94 | 95 | 96) [I do not consider refs 94-96 to support this assertion (none of these three articles has a fragment screening library design focus and the most recent one was published in 2007).] often through the use of fragments with high sp2-character. (97)  Underexplored areas of chemical space can be rapidly explored by employing fragments derived from NPs that are already biologically prevalidated by evolution. [The authors appear to be suggesting that the physiological effects of natural products are more due to the fragments from which they have been constructed than to the way in which the fragments have been combined.] 

Molecular recognition

The authors state:

That the embedded recognition of natural products for proteins correlates with recognition of the biosynthetic enzyme is an increasingly validated concept. (118 | 119 | 120) [I have no idea what “embedded recognition” means and I’m guessing that the authors might be in a similar position.] The biosynthetic imprint translates to recognition of other proteins using similar interactions. [As I’ve already noted, high binding affinity of a natural product for the enzyme that catalysed its formation would lead to inhibition of the enzyme.] For example, the analysis of protein structures of 38 biosynthetic enzymes gave 64 potential targets for 25 natural products. (121) [Concepts are usually validated with measured data and not by making predictions.]

Conclusions and Prospects for Future Development

The authors assert:

More natural molecules will increase quality through their inherently improved permeability and solubility; [At the risk of appearing pedantic, permeability and solubility are properties of compounds as opposed to molecules. That said, the authors appear to be treating “natural molecules” as occupying a distinct and contiguous region of chemical space by making this claim and it is unclear what the improvements will be relative to. The authors do not present any measured data for permeability or solubility to support their claim.] this is a case of investing time and effort in the early stages of drug discovery to reap rewards with improvements in the later stages through more predictability in trials (and thus a greater chance of success, where quality rather than speed demonstrably impacts (170)) [Many, including me, do indeed believe that investing time and effort in the early stages of drug discovery increases the chances of success in the later stages. However, I would challenge the assertion by the authors of Y2022 that ref 170 actually demonstrates this.] and more sustainable manufacturing methods driven by the transformative power of biocatalysis. (171)

So that concludes my review of Y2022 and thanks for staying with me. I'll leave you with a selfie here in Trinidad's Maraval Valley with my faithful canine companions BB and Coco providing much-needed leadership (a few minutes earlier I had patiently explained to them why ligand efficiency is complete bollocks).