Thursday, 18 October 2012

Screening library size


This post was prompted by a survey of screening library size by Teddy at Practical Fragments. I commented briefly there but it soon became clear that it was going to take a really long comment to say everything. While at AstraZeneca, I was involved in putting together a 20k screening library for fragment-based work and I thought that some people might be interested in reading about some of the experiences. The project started at the beginning of 2005 although the library (GFSL2005) was not assembled until the following year. Some aspects of the library design were also described in our contribution to the JCAMD special issue on FBDD. The NMR screening libraries with which I've been involved have been a lot smaller (~2k) than this, so why would one want to assemble a fragment screening library of this size? One reason was the diversity of fragment-based activities, which included a high concentration component when running HTS.

In screening, you soon learn the importance of logistics and, in particular, compound management. If you're going to assemble a screening library that can be easily used, this usually means getting the compounds into dimethyl sulfoxide (DMSO) so that samples can be dispensed automatically. Fragments will usually be screened at high concentration, which means that stock solutions also need to be more concentrated (we used 100 mM) because most assays don't particularly like DMSO. Another reason for maintaining stock solutions is that there are minimum volume limits for dissolving solids in DMSO, so starting from solids each time you run a screen is wasteful of material as well as time. The essence of the GFSL2005 project was ensuring that there were enough high concentration stock solutions in the liquid store to support diverse fragment-based activities on different Discovery sites.
While the entire library might be screened as a component of HTS, individual groups could screen subsets of GFSL05 that were significantly smaller (at least when I left AZ in 2009). It's important to remember that while GFSL05 was a generic library we were also trying to anticipate the needs of people who might be doing directed fragment screening. The compounds were selected using Core and Layer (which I've discussed before). This approach is not specific to fragment screening libraries and I've used it to put together a compound library for phenotypic screening. The in house software, some of which had been developed in response to the emergence of HTS, was actually in place by the beginning of 1996. Here (from one of my presentations) is a slide that illustrates the approach.
One of the comments on the Practical Fragments post was about molecular diversity and the anonymous commenter asked: "Are some companies more keen because of their bigger libraries or just lazy and adding more and more compounds without looking at diversity?" This is a good challenge and I'll try to give you an idea of what we were trying to do. Core and Layer is all about diversity in that the approach aims to drive compound selection away (in terms of molecular similarity) from what has already been selected. However, when designing the library we did try to ensure that as many compounds as possible had a near neighbour (within the library). Here's a slide showing (in pie chart format) the fractions of the library corresponding to each number of neighbours. There are actually three pie charts because counting neighbours depends on the similarity threshold that you use and I've done the analysis for three different thresholds.
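To make the neighbour-counting idea concrete, here is a minimal sketch (not the AZ in-house code) that counts, for each library member, the other compounds whose Tanimoto similarity meets a threshold. The toy fingerprints are just sets of 'on' bit indices; real fingerprints would come from a cheminformatics toolkit.

```python
def tanimoto(fp1, fp2):
    """Tanimoto similarity between fingerprints stored as sets of 'on' bits."""
    common = len(fp1 & fp2)
    return common / (len(fp1) + len(fp2) - common)

def neighbour_counts(fingerprints, threshold):
    """Count, for each compound, the other library members whose similarity
    to it meets or exceeds the threshold."""
    return [sum(1 for j, other in enumerate(fingerprints)
                if j != i and tanimoto(fp, other) >= threshold)
            for i, fp in enumerate(fingerprints)]

# Toy fingerprints for a three-compound 'library'.
library = [{1, 2, 3, 4}, {1, 2, 3, 5}, {7, 8, 9}]
for t in (0.5, 0.7, 0.9):
    print(t, neighbour_counts(library, t))
```

Raising the threshold can only decrease each compound's neighbour count, which is why the analysis was done at three thresholds.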
You can do a similar analysis to ask about neighbours of library compounds that are available. This is of interest because you'll generally want to be able to follow up hits with analogues in a process that some call 'SAR-by-Catalogue', although I regard the term as silly and refuse to use it. Availability is not constant and the analysis shown in the following slide was a snapshot generated in 2008. You'll notice that there is an extra row of pie charts since one can define availability more (>20 mg) or less (>10 mg) conservatively. If you require plenty of sample for availability and require neighbours to be very similar then you'll have fewer neighbours.
There was also some commentary on solubility and ideally this should be measured in assay buffer for library compounds. We used our in house high throughput solubility assay when putting GFSL2005 together. This solubility assay had originally been designed for assessing hits from conventional HTS and had a limited dynamic range (1-100 µM) which was not ideal for fragments. The assay used standard (i.e. 10 mM) stock solutions which meant that we could only do measurements for samples that were in this format (library compounds acquired from external sources were only made up as 100 mM stocks). Nevertheless, we used the assay since we believed that the information would still be valuable. The graphic below illustrates the relationship between solubility and lipophilicity (ClogP) for compounds that are neutral under assay conditions. We used ClogP to bin the data, which allowed us to make use of out of range data by plotting percentiles. This allowed us to assess the risk of poor solubility as a function of ClogP. It's worth pointing out that the binning was done so as to be able to include in range and out of range data in a single analysis. We just showed the lowest percentiles because we were only interested in the least soluble compounds. Even so, you should be able to get an idea of the variation in solubility for each bin.
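Here is a sketch of how out-of-range readings can still contribute to a percentile-per-bin analysis. The binning scheme and readings are invented for illustration and this is not the actual Colclough et al. protocol; the idea is simply that censored values can be ranked at the extremes without invalidating a low percentile.

```python
import math

def bin_percentiles(data, bin_edges, q=10):
    """data: (ClogP, solubility) pairs where solubility is a number (in-range,
    in uM) or the string '<1' / '>100' for out-of-range readings.
    Returns the q-th percentile of solubility for each ClogP bin. Censored
    readings are ranked at 0 or infinity, which is safe for a low percentile
    provided the percentile itself does not land on a censored point."""
    bins = {i: [] for i in range(len(bin_edges) - 1)}
    for clogp, sol in data:
        for i in range(len(bin_edges) - 1):
            if bin_edges[i] <= clogp < bin_edges[i + 1]:
                if sol == '<1':
                    bins[i].append(0.0)
                elif sol == '>100':
                    bins[i].append(math.inf)
                else:
                    bins[i].append(float(sol))
                break
    result = {}
    for i, vals in bins.items():
        if vals:
            vals.sort()
            k = max(0, math.ceil(q / 100 * len(vals)) - 1)
            result[(bin_edges[i], bin_edges[i + 1])] = vals[k]
    return result

# Invented readings: two ClogP bins, each containing one censored value.
readings = [(1.0, 50), (1.2, '>100'), (1.5, 30), (2.5, '<1'), (2.7, 5), (2.9, 10)]
print(bin_percentiles(readings, [0, 2, 4], q=10))
```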
Staying with the solubility theme, I'll finish off with a look at aromatic rings and carbon saturation as determinants of aqueous solubility. The two articles at which I'll be looking were both featured in a post at Practical Fragments and, given that the analyses in the two articles were not exactly rose-scented, the title of the post struck me as somewhat ironic. I'll start with Escape from Flatland, which introduces carbon saturation, defined as fraction sp3 (Fsp3), as a molecular descriptor. Figure 5 is the one most relevant to the discussion and this is captioned, "Fsp3 as a function of logS..." although the caption is not totally accurate. LogS is binned and it is actually average Fsp3 that is plotted and not Fsp3 itself. I'm guessing that the average is the mean (rather than the median) Fsp3 for each logS bin. If you look at the plot there appears to be a good correlation between the mid-point of each logS bin and average Fsp3, although it would have been helpful if they'd presented a correlation coefficient and shown a standard deviation for each bin. In fact, it would have been helpful if they'd shown some standard deviations for the other figures as well but that is peripheral to this discussion. The problem with Figure 5 is that the intra-bin variation in Fsp3 is hidden (i.e. no error bars) and without being able to see this variation it is very difficult to know how strongly Fsp3 (as opposed to its average value) is correlated with logS. Those readers who were awake (it was immediately after lunch) at my RACI talk last December will know what I'm talking about but hopefully some of the rest of you will at least be wondering why the authors didn't simply fit logS to Fsp3. Anyone care to speculate as to what the correlation coefficient between logS and Fsp3 might be?
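The point about averaging is easy to demonstrate with simulated data: correlating bin mid-points with bin means can give an impressive correlation coefficient even when the raw, per-compound correlation is weak. The trend, noise level and bin width below are all invented for illustration.

```python
import math, random, statistics

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
# Simulated data: a weak Fsp3/logS trend buried in a lot of scatter.
logS = [random.uniform(-8.0, 0.0) for _ in range(2000)]
fsp3 = [0.5 + 0.03 * s + random.gauss(0, 0.25) for s in logS]

r_raw = pearson(logS, fsp3)

# Bin logS (unit-width bins) and correlate bin mid-points with mean Fsp3.
mids, means = [], []
for lo in range(-8, 0):
    vals = [f for s, f in zip(logS, fsp3) if lo <= s < lo + 1]
    if vals:
        mids.append(lo + 0.5)
        means.append(statistics.fmean(vals))
r_binned = pearson(mids, means)

print(f"raw r = {r_raw:.2f}, binned r = {r_binned:.2f}")
```

Averaging within bins washes out the intra-bin variation, which is exactly the information that the missing error bars would have conveyed.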

Impact of aromatic ring count presents a box plot of solubility as a function of number of aromatic rings. At least the variation in solubility is shown for each value of aromatic ring count (even though I'd have preferred to see log of solubility plotted). The authors also looked at the correlation between number of aromatic rings and cLogP (which I prefer to call ClogP) and judged it to be excellent (although it is not clear whether they bothered to calculate a correlation coefficient to support their assertion of excellence). Correlations between descriptors are important because the effect of one can be largely due to the extent to which it is correlated with another. Although there are ways that you can model the dependence of a quantity on two descriptors that are correlated with each other, the authors chose to do this graphically using the pie chart array in Figure 6. If you look at the pie chart array, you can sort of convince yourself that aromatic ring count has an effect on solubility that is not just due to its effect on ClogP. However, there is established statistical methodology for dealing with this sort of problem and I couldn't help wondering why the authors didn't use it to analyse their data. What puzzled me even more was that they didn't seem to have considered the possibility that the number of aromatic rings might be correlated with molecular weight (Dan also picked up on this in his post) since I'd guess that this correlation might be even stronger than the one with ClogP. I do believe that a strong case can be made for looking beyond aromatic rings when putting screening libraries together and I'll point you to a recent initiative that may be of interest. However, this case is based on considerations of molecular recognition and molecular diversity. I don't believe that either of these studies (which both imply that substituting cyclohexadiene for benzene would be a good thing) strengthens that case.
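One standard tool for this sort of problem is partial correlation. The sketch below uses invented data in which ring count has no direct effect on solubility at all; the apparently strong marginal correlation largely disappears once the confounding descriptor is controlled for.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def partial_corr(x, y, z):
    """Correlation of x with y after removing the linear effect of z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / ((1 - rxz ** 2) * (1 - ryz ** 2)) ** 0.5

random.seed(7)
# Simulated compounds: aromatic ring count tracks ClogP but has no direct
# effect of its own on solubility.
clogp = [random.uniform(0.0, 6.0) for _ in range(500)]
rings = [max(0, round(c / 2 + random.gauss(0, 0.5))) for c in clogp]
logs = [-0.9 * c + random.gauss(0, 0.5) for c in clogp]

r_marginal = pearson(logs, rings)
r_partial = partial_corr(logs, rings, clogp)
print(f"marginal r = {r_marginal:.2f}, partial r (given ClogP) = {r_partial:.2f}")
```

The same calculation with molecular weight as the controlled variable would address the confounding that both Dan and I picked up on.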
Possibly if the analysis had been done differently I might have arrived at a different conclusion but a lack of error bars and a failure to account for the effect of molecular weight leave me with too many doubts.

Literature cited
Blomberg et al, Design of compound libraries for fragment screening. JCAMD 2009, 23, 513-525 DOI
Colclough et al, High throughput solubility determination with application to selection of compounds for fragment screening. Bioorg. Med. Chem. 2008, 16, 6611-6616 DOI
Lovering, Bikker & Humblet, Escape from Flatland: Increasing Saturation as an Approach to Improving Clinical Success. J. Med. Chem. 2009, 52, 6752-6756 DOI
Ritchie & MacDonald, The impact of aromatic ring count on compound developability: Are too many aromatic rings a liability in drug design? Drug Discov. Today 2009, 14, 1011-1020 DOI

Monday, 8 October 2012

Trip to Belém (my homework)

I arrived in Brazil at the end of April and I'm trying to learn Portuguese. My teacher is called Thalita and I have Portuguese lessons with Mihong, who is Korean. I'm writing a blog post about my trip to Belém in Portuguese as homework. I can't use a computer to translate, but I do get help conjugating the irregular verbs. This week we talked about food and I hope to discover the Korean recipe for hot dogs. I also watch television to help my understanding of Portuguese and I like Canal Rural (lots of cows).

I gave a talk at the IV Simpósio de Simulação Computacional e Avaliação Biológica de Biomoléculas na Amazônia (SSCABBA). The symposium took place at the Universidade Federal do Pará (UFPA) and I have two photos of the campus.

Jerônimo (on the left) was the coordinator of the event and I very much enjoyed the talk by Ernesto (next to Jerônimo). Ademir (orange shirt) gave a Car-Parrinello course and I attended one of his talks.

My talk was in English but I made a joke in Portuguese about the Argentinians (los hermanos). A water molecule has less energy when it is far from a hydrophobic surface. In thermodynamics, low energy and happiness are equivalent. Perhaps I could give this talk in Buenos Aires?

Thursday, 20 September 2012

FBLD 2012 Preview

FBLD 2012 is about to happen. Although I'll not be there (and not even nearby), I thought that a preview, like the one posted recently for EuroQSAR (which I also didn't attend), might be in order.

Rod Hubbard will kick things off and he always does a great talk.  Following his contribution will be three talks from Big Pharma.  I've only ever been to one fragment conference and the Big Pharma contributions at that meeting tended to be strategy-heavy and results-light.  Hopefully things will have moved on a bit in the last three and a half years...

Were I attending the meeting, I'd be paying particularly close attention to the membrane protein talks (and might even take the opportunity to ask one of the creators of the Rule of 3 how hydrogen bond acceptors are defined when applying Ro3).  I would also be trying to learn as much as possible about newer technologies for measuring affinity.  Remember that the power of a fragment screening assay is defined by the weakness of the binding that can be detected. Although always desirable, high throughput is a secondary consideration in these assays since one rationale for screening fragments is that one doesn't need to assay as many compounds.  Expect to see at least one speaker get reminded that in SPR molecules are 'tethered' rather than 'immobilised'.

Thermodynamics and kinetics provide the focus of a few of the talks and usually one will hear about the benefits of 'enthalpy-driven' binding and slow off-rates.  My stock question for those who assert the benefits of binding that is 'enthalpy-driven' is, "how do isothermal systems sense enthalpy changes associated with binding?" and I have also made this point in print.  For a fixed Kd reducing the off-rate will also reduce the on-rate and binding kinetics have to be seen in the broader context of distribution.  If the binding is faster than distribution then on-rates and off-rates become irrelevant, except to the extent to which they determine Kd.
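The kinetic point follows directly from Kd = koff/kon for a 1:1 binding equilibrium; at fixed Kd, any reduction in off-rate forces a proportional reduction in on-rate. The numbers below are hypothetical.

```python
def kon_required(kd_molar, koff_per_s):
    """For 1:1 binding, Kd = koff/kon, so kon = koff/Kd at fixed Kd."""
    return koff_per_s / kd_molar

kd = 1e-8  # 10 nM affinity, hypothetical
for koff in (1.0, 0.1, 0.01):
    print(f"koff = {koff:>5} /s  ->  required kon = {kon_required(kd, koff):.1e} /M/s")
```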

At a conference like this, I'd have hoped to see something along the lines of 'unanswered questions and unsolved problems'.  How predictive are fragment properties of the properties of structurally-elaborated molecules?  Just how strong is the correlation of promiscuity with lipophilicity?  Is getting structures for protein-ligand complexes still a bottleneck?

So best wishes for an enjoyable and successful meeting.  Make sure to test the wits of the speakers with some tough questions.  It's character-building for them and loads of fun for everybody else.  Meanwhile here in Brasil it is 'imunização para insetos' at my place and I've cooked up a local response to enthalpy optimisation: Termodinâmica Macumba.

Tuesday, 18 September 2012

Ligand deconstruction

Molecular interactions are of interest in molecular design because the functional behaviour of a compound is determined by how strongly its molecules interact with the different environments in which they exist. Although I'm talking primarily about non-covalent interactions, reversible covalent bond formation, for example between the catalytic cysteine of a protease and a nitrile carbon, can also fit into this framework. Molecular design can be hypothesis-driven or prediction-driven and you'll have guessed from my last post which approach I favor. Hopefully at some point in the future we'll be able to predict well enough to do molecular design and when we do get there I think that we'll find that the models will have a strong physical basis. Until then, hypothesis-driven molecular design will continue to have an important role.

Molecular interactions are relevant to both prediction-driven and hypothesis-driven molecular design. Design hypotheses are often framed in terms of molecular interactions and a predictive model for affinity that fails to capture the physics of molecular interactions will choke when used outside narrowly-defined congeneric series. Although we think of affinity in terms of contacts between protein and ligand, it is important to remember that the contribution of a particular contact to affinity is not strictly an experimental observable.

In FBDD we think of ligands in terms of their component fragments and are particularly interested in the extent to which the properties of fragments determine the properties of structurally elaborated compounds. Comparing the affinities of ligands with the fragments from which they might have been derived is one way in which this question can be addressed and one will occasionally encounter the term 'deconstruction' in the FBDD field. Just as you need to be careful how you link fragments when assembling a ligand, you also need to be careful how you decompose a ligand into component fragments. In this blog post I'm going to use a well-known deconstruction study to highlight some of the things that you need to think about when deconstructing.

I like to think of molecular interactions in terms of generic atom types such as 'neutral hydrogen bond acceptor' or 'cationic hydrogen bond donor'. This is a pharmacophoric view of molecular recognition which is also relevant to bioisosterism and scaffold-hopping. Those who take a more physical view of molecular recognition would say that pharmacophoric atom-typing is just cheminformatics and somewhat uncouth. However, you can capture a lot of physics with atom-typing and it's not like placing atomic charges on nuclei is such great physics anyway...

When deconstructing a ligand molecule you want to minimise changes to the way in which a binding site might see atoms in the ligand. For example, breaking the carbon-nitrogen bond of an amide and adding hydrogens is not a great idea because you'll turn a hydrogen bond donor into a cation (at physiological pH) and a strong hydrogen bond acceptor into a weaker one. Deconstructions that add or remove hydrogen atoms from nitrogen or oxygen atoms are usually not a good idea. In the featured ligand deconstruction study, fragments 2 and 3 were derived from the structure of 1. The acyl sulfonamide group of 1 would be expected to be predominantly deprotonated under normal physiological conditions (a pKa of 5.4 has been reported for sulfacetamide). In contrast, fragment 2 would be expected to be predominantly neutral at normal physiological pH (benzenesulfonamide pKa is 10.1). This means that the deconstruction of the acylsulfonamide transforms an anionic nitrogen into a neutral one that is bonded to a donor hydrogen. This makes it difficult to draw conclusions from the observation that the fragment does not bind to the target. Is the interaction between the relevant part of the parent ligand and the target very weak or has the deconstruction changed the pharmacophoric character of interacting atoms?

The deconstruction of 1 to 3 effectively creates a cationic center (a pKa of 5.1 has been reported for dimethylaniline) and having two nitrogen atoms in the piperazine ring does introduce complications. Two pKa values are observed for piperazine and in a recent study these were found to be 9.7 and 5.5 at 298 K. This tells us that protonation of one of the nitrogen atoms makes it more difficult to protonate the other one (which makes sense). These measured pKa values also tell us that piperazine will exist predominantly as a monocation at normal physiological pH and the corresponding values for 1,4-dimethylpiperazine are 8.4 and 3.8. If you take a look at the source that I used for the benzenesulfonamide pKa, you'll see that attaching a phenyl ring to a carbon that is bonded to a basic nitrogen will make that nitrogen less basic by about one log unit. Bringing this all together for compound 1 suggests that protonation of piperazine will occur preferentially at the left hand nitrogen (see figure above) and that the relevant pKa will be about 7.4. Deconstruction to fragment 3 is expected to shift the preferred site of protonation to the other nitrogen.
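The speciation argument can be made quantitative with Henderson-Hasselbalch arithmetic for a diprotic base; the sketch below uses the measured piperazine pKa values (9.7 and 5.5) quoted above.

```python
def protonation_fractions(ph, pka1, pka2):
    """Fractions of neutral, mono- and di-protonated forms of a diprotic base.
    pka1 governs the monocation (BH+ <-> B + H+), pka2 the dication."""
    r1 = 10 ** (pka1 - ph)   # [BH+]/[B]
    r2 = 10 ** (pka2 - ph)   # [BH2++]/[BH+]
    denom = 1 + r1 + r1 * r2
    return 1 / denom, r1 / denom, r1 * r2 / denom

# Piperazine at normal physiological pH, using pKa values of 9.7 and 5.5.
neutral, mono, di = protonation_fractions(7.4, 9.7, 5.5)
print(f"neutral {neutral:.3f}  monocation {mono:.3f}  dication {di:.3f}")
```

The calculation confirms that the monocation predominates at pH 7.4, exactly as the measured pKa values suggest.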

So what is the protonation state of piperazine when compound 1 binds to its target? The closeness of the likely pKa to normal physiological pH makes it difficult to say and if the relevant proton/lone-pair is directed away from the protein surface then cationic and neutral forms may have similar affinity. At this point, I should mention that I couldn't find the value(s) of the pH at which the NMR experiments were performed (if this information is indeed there, I'll invoke the 'reading PDF on my computer defense') and the information really needs to be communicated in a study such as this one.

I'll mention a couple of other deconstructions to illustrate the point that changing an element may sometimes result in less perturbation of the relevant substructures.  Fragment 4 gets round the problem of deconstruction shifting the preferred site of protonation. The nitrogen atom in the parent molecule that is mutated into carbon will be a weak hydrogen bond acceptor because it is linked directly to an aromatic ring. It can be argued that mutating a weak hydrogen bond acceptor into a hydrophobic atom represents a smaller perturbation than mutating it into a cationic center. However, piperidine is more basic than piperazine so there will be less neutral form (which may or may not be relevant). Deconstruction to fragment 5 preserves (actually is likely to strengthen) the hydrogen bond acceptor character of the less basic piperazine nitrogen but is likely to decrease the amount of cationic form because morpholine is less basic than piperazine.

Hopefully this will have got you thinking in a bit more depth about ligand deconstruction and I'll finish off with a cartoon of how we might use deconstruction in lead optimisation.  First we check that we can actually measure affinity for a fragment that is obtained by deconstructing the lead compound.  Then we assemble SAR (could be a good way to explore bioisosteric replacements) before incorporating the best fragments into the lead structure.  Essentially, the fragment assay allows us to assemble SAR in a more accessible region of chemical space.

Literature cited 

Barelier, Pons, Marcillat, Lancelin, Krimm, Fragment-Based Deconstruction of Bcl-xL Inhibitors.  J. Med. Chem. 2010, 53, 2577-2588. DOI

Cabot, Fuguet, Ràfols, Rosés, Fast high-throughput method for the determination of acidity constants by capillary electrophoresis. II. Acidic internal standards. J. Chromatography A 2010, 1217, 8340–8345. DOI

Milletti, Storchi, Goracci, Bendels, Wagner, Kansy, Cruciani, Extending pKa prediction accuracy: High-throughput pKa measurements to understand pKa modulation of new chemical series. Eur. J. Med. Chem. 2010, 45, 4270-4279. DOI

Fickling, Fischer, Mann, Packer & Vaughan, Hammett Substituent Constants for Electron-withdrawing Substituents: Dissociation of Phenols, Anilinium Ions and Dimethylanilinium Ions. JACS 1959, 81, 4226-4230. DOI

Khalili, Henni, East, pKa Values of Some Piperazines at (298, 303, 313, and 323) K. J. Chem. Eng. Data 2009, 54, 2914-2917. DOI



Monday, 20 August 2012

QSAR: Nailed to its perch?

I must confess that I’ve never been a big fan of QSAR. When I started in Pharma 24 years ago, QSAR was seen as something that would solve all our problems and, over the years, a number of other panaceas would follow in its wake. I find it useful to classify molecular design as either hypothesis-driven or prediction-driven and will discuss this a bit more in a future post. QSAR fits into the prediction-driven category and, to get you thinking a bit about the subject, I'll share a couple of slides from my RACI talk last December.

So EuroQSAR is due to happen again and this time there'll be a session to commemorate QSAR's founding father Corwin Hansch, who died last year.  So will 'Grand Challenges for QSAR' deliver?  Were I going to be there, I'd be checking out Maggiora's talk (Activity Cliffs, Information Theory, and QSAR) since people in the field really need to start thinking more about QSAR in terms of relationships between structures.  Although it's not part of the Hansch session, I'd also be checking out 'The Power of Matched Pairs in Drug Design' by my good friend (and former colleague) Jonas Boström since Matched Molecular Pairs represent one way to recognise and articulate relationships between structures.  And of course I wouldn't miss the Hansch Awardee's talk, the title of which reminded me of an Austrian who struggled, although you won't find that one stocked in the local book shops...

I would like to have seen something on training set design and validation in the 'Grand Challenges' session.  Generally, building and validating multivariate models works best when the compounds are distributed evenly in the relevant descriptor space.  Clustering in descriptor space can result in validation giving an optimistic view of model quality and that's one way to end up over-fitting your data.  Maybe this was one Grand Challenge that the Organising Committee just didn't have the stomach for...
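A toy demonstration of the point, using a 1-nearest-neighbour model on invented, deliberately clustered data: random splitting leaves near-duplicates of every test compound in the training set and flatters the model, while holding out whole clusters does not.

```python
import random, statistics

def knn_predict(train, x):
    """1-nearest-neighbour prediction in a one-dimensional descriptor space."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def rmse(test, train):
    errs = [(knn_predict(train, x) - y) ** 2 for x, y in test]
    return statistics.fmean(errs) ** 0.5

random.seed(3)
# Five tight clusters in descriptor space; activity differs between clusters.
clusters = []
for centre in (0, 5, 10, 15, 20):
    clusters.append([(centre + random.gauss(0, 0.1),
                      0.3 * centre + random.gauss(0, 0.1)) for _ in range(20)])
data = [p for c in clusters for p in c]

# Random split: near-duplicates of each test compound remain in training.
random.shuffle(data)
half = len(data) // 2
rmse_random = rmse(data[:half], data[half:])

# Cluster-out split: whole clusters are held out, a much sterner test.
rmse_cluster = statistics.fmean(
    rmse(held, [p for c in clusters if c is not held for p in c])
    for held in clusters)

print(f"random-split RMSE: {rmse_random:.2f}")
print(f"cluster-out RMSE:  {rmse_cluster:.2f}")
```

The random-split number says almost nothing about how the model will perform on a genuinely novel chemical series, which is the scenario that matters in practice.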

So that's all from me for now.  Why not print out 'QSAR: dead or alive?' (it infuriates those who would seek to lead your opinion) to read on the plane and think up some nasty questions on validation for the experts while waiting in Passkontrolle?

Literature Cited

Doweyko, QSAR: dead or alive? JCAMD 2008, 22, 81-89 DOI

Thursday, 9 August 2012

Lipophilicity and Pharmacological Promiscuity

We often hear and read that pharmacological promiscuity is strongly correlated with lipophilicity and it's often a good idea to check that the dogma matches the data. I set up a poll (which closed just over a week ago) for the LinkedIn FBDD group that asked the question:

Is pharmacological promiscuity strongly correlated with lipophilicity?

And the responses were: Yes (43 votes), No (5 votes) and Don't Know (2 votes)

Although the poll is now closed, we're still discussing the results so why not come and join the fun.

Sunday, 29 July 2012

Imidazole lipophilicity revisited

It's almost three months since I arrived in Brasil to spend a year in São Carlos at Universidade de São Paulo. One of the advantages of being back in academia is that access to the literature is a lot better and I managed to dig up some cyclohexane/water partition coefficients for a couple of the compounds that featured in last year's Lipophilicity Teaser. While octanol/water doesn't appear to 'see' the hydrogen bond donor that is unmasked by moving the methyl group, the cyclohexane/water partitioning system certainly does. Readers with an interest in Physical-Organic Chemistry might like to think about how tautomerism might affect partition coefficients. There are also a couple of lipophilicity-based items (logP versus logD for ADMET discussion and a poll on correlation of pharmacological promiscuity) that are current in the FBDD LinkedIn group so why not drop by and join the fun there.
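For readers unfamiliar with it, Seiler's ΔlogP is just the difference between the octanol/water and alkane/water logP values; the figures below are invented purely to illustrate how an unmasked hydrogen bond donor shows up in the comparison.

```python
def delta_logp(logp_octanol, logp_cyclohexane):
    """Seiler's delta logP; larger values generally indicate hydrogen bond
    donor (and acceptor) character that octanol, itself a donor/acceptor,
    tends to mask."""
    return logp_octanol - logp_cyclohexane

# Hypothetical isomers with similar octanol/water logP but very different
# cyclohexane/water logP once a donor is unmasked.
print(delta_logp(1.0, 0.5))    # donor masked: small delta logP
print(delta_logp(1.0, -2.5))   # donor exposed: large delta logP
```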


Literature cited

Abraham, Chadha, Whiting & Mitchell, Hydrogen bonding. 32. An analysis of water-octanol and water-alkane partitioning and the Δlog P parameter of Seiler. J. Pharm. Sci. 1994, 83, 1085-1100. DOI

Radzicka & Wolfenden, Comparing the polarities of the amino acids: side-chain distribution coefficients between the vapor phase, cyclohexane, 1-octanol, and neutral aqueous solution. Biochem. 1988, 27, 1664-1670. DOI

Saturday, 26 May 2012

How not to do FBDD?

It’s not often that bloggers get responses from the authors of the articles that they’ve featured and it’s much rarer still for a reviewer of a manuscript to break cover and join the discussion. However, that is exactly what happened when an article on (Trimethoprim-resistant, Type II R67) Dihydrofolate Reductase got coned in the searchlights at Practical Fragments. Soon it had been Pipelined as well and the matter would normally have ended there. At this point both the corresponding author and one of the reviewers of the manuscript got in touch with both Teddy and Derek, which prompted a follow up post at Practical Fragments and two in the Pipeline ( 1 | 2 ). I have a particular interest in the relationship between journals and blogs and thought it would be a good idea to see what all the fuss was about.

The article describes a fragment screen (using a biochemical assay) against the target and the synthesis and characterisation of two inhibitors based on one of the fragment hits. None of the IC50 values measured for the fragment hits was less than 1.7mM and I was curious why the authors ran the assay at 30 x Km for NADPH since this (presumably) meant that fragments needed to be screened at higher concentrations. Although many in the FBDD community are dismissive of the use of biochemical assays for fragment screening, I believe that in some situations they do represent a viable way forward, but only if appropriate precautions are taken. The authors do hint at attempts to deal with interference (e.g. reducing path length) but I’d like to have seen some evidence that they were fully aware of the problem and had taken the appropriate steps to recognise and, if necessary, correct for interference. Some discussion of intracellular co-factor and substrate concentrations would also have been helpful since it would help to place the IC50 and Ki values for the structurally elaborated compounds in the appropriate context. Some readers may have encountered this issue when comparing assay results for ATP-competitive kinase inhibitors using different ATP concentrations.
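The concern about substrate concentration can be made concrete with the Cheng-Prusoff relationship for competitive inhibition, IC50 = Ki(1 + [S]/Km); the Ki below is hypothetical.

```python
def ic50_competitive(ki, s_over_km):
    """Cheng-Prusoff relationship for a competitive inhibitor:
    IC50 = Ki * (1 + [S]/Km)."""
    return ki * (1 + s_over_km)

ki = 0.5  # mM, hypothetical fragment
print(ic50_competitive(ki, 1))    # assay run near Km
print(ic50_competitive(ki, 30))   # assay run at 30 x Km
```

Running a competitive assay at 30 × Km inflates IC50 roughly 15-fold relative to a near-Km assay, which is why the choice of substrate (or co-factor) concentration matters so much when the hits are already weak.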

Putting my concerns about the assay aside, I do have some issues about how the fragments were selected for assay. We screen fragments to ask questions about the structural requirements for binding and you don’t always need a large screening library in order to ask good questions. Once you’ve got hits, it’s important to follow these up with carefully chosen analogues to build SAR that can (hopefully) be mapped onto synthetically elaborated structures. This is especially critical when the hits are weakly active by a biochemical assay and neither crystal structures for bound fragments nor an orthogonal assay are available. Given the results presented in Figure 1 and Table 1, I was really surprised that they didn’t assay the unsubstituted 2-naphthoic acid. While assaying fragments 4a and 4c represents a step in the right direction for following up fragment 4, I’d be looking for something along the lines of the following scheme from researchers who were thinking in terms of molecular recognition:

In my view, the authors didn’t demonstrate any real insight in the way that they selected the compounds for their fragment screening library or how they explored fragment SAR. Although they had a protein structure available for the target, they don’t seem to have used this for selection of fragments for screening. The execution of the fragment screen and its follow up would be less of an issue had the authors managed to get crystal structures for fragment-protein complexes or discovered a series of highly active compounds with interesting SAR that killed bacteria. However, all the authors really have to show for having run this fragment screen is two dianionic inhibitors of low ligand efficiency. I couldn’t help wondering why linking fragments led to such a catastrophic loss of ligand efficiency. I also couldn't help wondering why all the ligand efficiencies were negative and in the wrong units since you'd have expected the reviewers (there were at least two of them) to pick this sort of thing up.
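For reference, ligand efficiency as usually defined is positive and carries units of kcal/mol per heavy atom; a minimal sketch (the fragment below is hypothetical):

```python
def ligand_efficiency(pic50, heavy_atoms, temp=298.15):
    """LE = -dG/N = (2.303 R T / N) * pIC50, in kcal/mol per heavy atom.
    Defined this way, LE is positive for any measurable inhibitor."""
    R = 1.987e-3  # gas constant in kcal/(mol K)
    return 2.303 * R * temp * pic50 / heavy_atoms

# A hypothetical fragment: IC50 = 1.7 mM (pIC50 about 2.77), 13 heavy atoms.
print(round(ligand_efficiency(2.77, 13), 2))
```

A negative value, or a value quoted in the wrong units, is the sort of thing a reviewer should catch at a glance.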

The process by which the authors used the output of the fragment screen to ‘design’ the elaborated molecules is especially deserving of comment. The imidazole carboxylic acid was selected as a starting point for elaboration because it was more selective with respect to the human enzyme. Personally, I’d be wary of placing too much weight on selectivity given the possibility of interference in these assays but I’d certainly be prepared to put my concerns to one side if the elaboration process had led to some really active compounds that were indeed selective. The idea of linking two carboxylic acids appears to have been inspired by the structure of Congo Red. Dyes typically need to stick to the organic material that they are used to color and in the early days of high throughput screening, we’d see a lot of dyes like this coming up as active. I was surprised that the authors chose to link from the carbon between the imidazole nitrogen atoms because the fragment with a methyl substituent at this position showed no inhibition at 10 mM and this is precisely the sort of result that is invoked to support the assertion that fragment 4 is selective with respect to the human enzyme. However, I’d like to get back to the structure of Congo Red...

Congo Red is actually a pretty rigid beast and can only exist in extended conformations, with each half of the molecule tending to coplanarity. If you’re going to use Congo Red as a template for design then it really would be a good idea to check that your designed molecules really do overlay with the template. It isn’t rocket science. At this point, it should be mentioned that the authors had got to their inhibitors without any of what I’ll call geometric modelling. This might have been more acceptable if the compounds had been more potent, more numerous and had actually killed bacteria. Reading this article caused me to dream up a new efficiency metric for medicinal chemistry journal articles in which the sum of the pIC50 values for all the compounds in the article is divided by the number of pages in the article...

This is probably a good point to note that the authors do not appear to have made any real use of the crystal structure of the target either to select fragments or in the design of the elaborated molecules. Therefore, I was somewhat surprised to see over two pages of the article devoted to a docking study of the inhibitors. They appeared to have some difficulty in reproducing the (crystallographically determined) binding mode of the NADPH ligand using the docking software so a reader could legitimately ask why she or he should believe the proposed binding modes for the other ligands. One common feature of the predicted binding modes is that the molecules are ‘folded into a U shape’ and you can see these in Figure 4. Bending molecules like these into ‘U shapes’ costs energy and many (most) docking programs don’t account for conformational energy costs particularly well. Also, Congo Red (which appears to have provided the basis for the dicarboxylic acid inhibitors) can’t bend into a ‘U shape’, so if these inhibitors really are binding in the proposed manner then they can’t be functioning as the mimics of Congo Red that they were ‘designed’ to be. The authors also note that some other researchers have proposed ‘a linear mode of binding’ (whatever that means) in which the molecules do not bend into ‘U shapes’. I couldn’t help wondering why they’d not bothered to dock Congo Red...

Neither of the inhibitors showed a measurable effect on bacteria even when tested at 1mM and the authors suggested ‘that these compounds do not penetrate into E. Coli’. This may well be the case but potency also needs to be seen in the context of the physiological concentrations of both substrate and co-factor. The authors claim selectivity for compounds 8 and 9 with respect to human DHFR although the maximum test concentrations used were both less than ten-fold above the relevant IC50 values for the R67 enzyme.

The authors presented cytotoxicity data for compounds 8, 8a, 8b and 9 and noted that all except 8b showed weak cytotoxicity. It’s worth noting that the IC50 (34 µM) for 8a in the cytotoxicity assay is almost two-fold lower than the IC50 for 8 against the target enzyme. Compounds 8 and 9 may only be weakly cytotoxic but they still appear to kill the 3T6 cells in the cytotoxicity assay more efficiently than they kill bacteria. Apparently 8b has some antiproliferative effects against a number of parasites although it is not clear what relevance this has to R67 DHFR inhibition since this compound shows no measurable activity against this enzyme.

That brings me to the end of my review of this article and it’s now time to sum up. The authors conclude their article by stating:

‘This work provides inspiration for the design of the next generation of inhibitors.’

and I believe this statement is as inaccurate as it is immodest. I thought that the execution of the fragment screen, in particular the exploration of fragment SAR, was weak. The ‘design’ didn’t hang together (e.g. linking the benzimidazoles from C2, where methylation led to loss of inhibition by the fragment) and may have been overly influenced by some of the authors’ interest and previous experience in the synthetic chemistry of linked benzimidazoles. The docking studies appeared to have been done after the compounds had been ‘designed’ and synthesised and, in my view, contributed absolutely nothing. I suggest the authors, reviewers and editor of this article all read (and make sure that they understand) the recent J. Med. Chem. guidelines on computational medicinal chemistry. All of this would have been less of an issue if the authors had actually discovered a novel series of R67 DHFR inhibitors that killed bacteria. However, the two compounds that they’ve discovered, which they optimistically describe as a class, are unexciting from the pharmaceutical perspective. Perhaps if they’d pursued the fragments a bit more imaginatively and done some real design they might have come up with something more exciting that was actually anti-bacterial. However, that is speculation and I believe that the Journal’s readers and subscribers deserve better than this.

To be quite honest, I wouldn't normally review an article like this and only did so because of the reaction(s) generated by the original blog posts and my interest in the relationship between blogs and journals. A specialist scientific journal needs to shape the debate that drives its field forward and create an environment in which articles are discussed publicly. The authors of those articles should be active participants in the discussion. I'll finish with one of the comments from Derek's second post:

'I agree the online discussion about the article is great, but it does need to be more tightly linked to the original publication. No reason why the original publishers can't allow public comments.'

Earth calling Journal Editors: the ball's now in your court and we'd all really love to hear what you've got to say.

Literature cited

Bastien, Ebert, Forge, Toulouse, Kadnikova, Perron, Mayence, Huang, Eynde & Pelletier, Fragment-Based Design of Symmetrical Bis-benzimidazoles as Selective Inhibitors of the Trimethoprim-Resistant, Type II R67 Dihydrofolate Reductase. J. Med. Chem. 2012, 55, 3182–3192 DOI

Mayence, Pietka, Collins, Cushion, Tekwani, Huang & Eynde, Novel bisbenzimidazoles with antileishmanial effectiveness. Bioorg. Med. Chem. Lett. 2008, 18, 2658–2661 DOI

Stahl & Bajorath, Computational Medicinal Chemistry. J. Med. Chem. 2011, 54, 1–2 DOI

Tuesday, 20 March 2012

Scoring non-additivity

Getting the most from target-ligand interactions is the essence of structure-based design and many of us believe that fragment-based methods represent an excellent way to do this. Recently, I came across an article that looked at cooperativity and hot spots in the context of scoring functions. Given that the Roche group appear to be trying to establish themselves as authorities on molecular recognition (see review of this article), I decided to take a closer look.

We often think of molecular recognition in terms of contacts between molecules with each contact contributing (sometimes unfavourably) to the change in free energy associated with formation of the complex. Although this is an appealing picture (and one we use in scoring functions as well as some QSAR approaches to affinity prediction) it is worth remembering that the contribution of an individual contact to affinity is not strictly an observable or something that you can actually measure.

Reading the article, I encountered the following in the introduction:

‘Classical experiments by Williams et al. on glycopeptides antibiotics18 and on the streptavidin-biotin complex19 have shown that binding causes these systems to be more rigid and thus enthalpically more favorable. The loss of binding entropy caused by the reduced motion is more than compensated by the gain in enthalpy achieved through tighter interactions.’

I would challenge the assertion that making a system more rigid necessarily results in a decrease in its enthalpy. If you took a complex and rigidified it in the wrong way, you would actually increase its enthalpy, potentially to the extent that it exceeded that of the unbound complexation partners, so it’s probably worth taking a closer look at rigidity in the context of binding before moving on. When people try to ‘explain’ affinity in terms of structure it usually results in a lot of arm waving, which typically increases when entropy and enthalpy enter the equation. One way to think about entropy is in terms of constraints. As you increase the extent to which a system is constrained you decrease its entropy. When a ligand binds to a protein it becomes constrained in a number of ways. Firstly, it is constrained to move with the protein and introducing this constraint results in a loss of translational entropy. Secondly, binding restricts the conformations available to the ligand and we can think of the ligand as being constrained to exist in a subset of the conformations in which it could exist before binding. We can think of the protein as constraining the conformational space of the ligand by modifying the potential energy surface of the ligand. Entropy changes associated with binding quantify changes in the degree to which the system (protein, ligand and solvent) is constrained. As a system becomes more constrained it moves less, and it is more correct to say that the additional degree of constraint ‘causes’ the reduced motion than to say that the reduced motion ‘causes’ the reduction in entropy.

However, I don’t want this post turning into Thermodynamics 101 and should get back to the featured article. The scoring functions used to assess potential binding modes (poses) typically include a weighted sum of the contacts between protein and potential ligand. Setting these weights appropriately is the biggest problem in the development of scoring functions of this nature. The authors of the featured article attempt to go beyond an additive model by using subgraph network descriptors to capture cooperativity. Many of the ligands, protein structures, measured potencies and affinities used in this study appear to be proprietary. Personally, I found it difficult to figure out exactly what the authors had done and I would challenge anybody to repeat the work. An editorial (admittedly in Journal of Medicinal Chemistry rather than Journal of Chemical Information and Modeling) is very clear on this point and the authors might want to check it out before submitting further manuscripts on this theme:
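To make concrete what ‘weighted sum of contacts’ means, here is a minimal sketch of a pairwise-additive scoring function; the interaction types and weights are hypothetical and are not those used in the featured article:

```python
# Hypothetical weights: favourable contacts score negative (better),
# repulsive contacts are penalised with positive values.
WEIGHTS = {
    "hbond": -1.0,
    "hydrophobic": -0.3,
    "cation_pi": -0.8,
    "clash": +2.0,
}

def pairwise_score(contacts):
    """Score a docking pose from a dict of interaction type -> contact count."""
    return sum(WEIGHTS[kind] * count for kind, count in contacts.items())

pose = {"hbond": 3, "hydrophobic": 5, "clash": 1}
print(pairwise_score(pose))  # → -2.5
```

An additive model like this assigns the same value to a given contact regardless of its neighbours, which is exactly the limitation that network terms are intended to address.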

‘Computational methods must be described in sufficient detail for the reader to reproduce the results’

Molecular interactions form the basis of this work and the authors ‘propose a comprehensive set of attractive and repulsive noncovalent interactions’ for which the atom types are defined using SMARTS notation. Since the authors don’t actually tell us what SMARTS patterns they use to define the interaction types, Tables 1 and 2 raise more questions than they answer. Does the hydrogen bond interaction include charged donors or acceptors and, if so, are ions in contact also considered to be hydrogen bonded? What is the difference between a dipolar interaction and a hydrogen bond? Why do dipoles interact with cations but not anions? If the cation-π interaction is so favourable, why is the anion-π interaction considered as insignificant? You can claim

‘a newly defined comprehensive set of noncovalent contacts that encode state-of-the-art knowledge about molecular interactions’

but if you’re not going to tell us how you do the encoding, readers are going to be left with the impression that the statement is as inaccurate as it is immodest.

Using a single pairwise interaction coefficient to score all hydrogen bonds severely limits the accuracy of an interaction-based scoring function, especially if hydrogen bonds involving charged groups are to be included. Whilst cooperative effects are relevant to the scoring of hydrogen bonds, the inherent variability of individual hydrogen bonds is likely to be at least as important. One way to make hydrogen bonds pay their way in aqueous media is to balance the strengths of donor and acceptor. If an acceptor is ‘too weak’ for the donor, the former is unable to fully ‘compensate’ the latter for its ‘lost solvation’. I’ve discussed this in the introduction to a 2009 article on hydrogen bonding if you’re interested in following this up a bit more, and you might also want to look at these measured values of hydrogen bond acidity and basicity for diverse sets of donors and acceptors.

Although values of hydrogen bond acidity and basicity measured for prototypical (e.g. single donor or acceptor) model compounds are useful in molecule design, they don’t tell the whole story. The environment (e.g. at the bottom of a narrow hydrophobic pocket) of a hydrogen bonding group in a protein can greatly affect its recognition characteristics. It may yet prove possible to model hydrophobic enclosure accurately in terms of cooperative contacts between protein and ligand but I'll not be holding my breath. There does seem to be a view that the contribution of hydrogen bonds to affinity will be small and my former Charnwood colleagues assert that a neutral-neutral hydrogen bond will contribute no more than a factor of 15 (1.2 log units). However, at Alderley Park (where we never used to pay too much attention to directives from ‘Camp Chuckles’) we managed to find a couple of examples where replacing CH in a pyridine ring with nitrogen resulted in potency increases of 2 log units. I guess the jury is still out on this one.

The authors of the featured article used their interaction types and subgraph network descriptors to develop a new scoring function which they call ScorpionScore (SScorpion). They assembled four training sets and

‘maximized the Spearman rank correlation coefficient for affinity data sets I–III and minimized deviations in absolute affinity differences for training set IV in parallel, with each data set being weighted by 25%’.

It was not clear why they weighted the data that comprise sets I–III in this manner but it will have the effect of slightly emphasising the contribution from set I, which is smaller (31) than the other two sets (II: 46; III: 42). It would have been helpful to explicitly state the function that they’d optimized and I’d also like to have known exactly how they removed

‘highly correlated (Spearman rank correlation ρ > 0.8) terms from the set’.
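For readers unfamiliar with the objective being maximised: Spearman's coefficient is just the Pearson correlation of the ranks, so only the ordering of the values matters. A self-contained sketch (ties are not handled, for brevity):

```python
def ranks(xs):
    """Rank values from 1 (smallest) upward; assumes no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Any score that ranks the ligands in the right order gives 1.0.
print(spearman([5.8, 4.4, 8.0], [1.2, 0.9, 3.3]))  # → 1.0
```

Note that any strictly monotonic transformation of the scores leaves the coefficient unchanged, which is why a rank-based objective says nothing about absolute affinity differences (hence, presumably, the separate treatment of training set IV).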

The performance of the Scorpion scoring function is summarised in Table 7 of the article. There only seems to be an advantage in using the network terms for training sets I (Neuraminidase) and IV (Activity Cliffs). Significantly, the performance of SScorpion is no better than a simple count of heavy (non-hydrogen) atoms for data set III (Diverse), which one might argue is the most relevant to virtual screening against an arbitrary target. The results presented in Table 7 quantify how well the different scoring schemes fit the training data and one would expect that using the network terms (SScorpion) will be better than not using the network terms (Spairwise) simply because more parameters are used to do the fitting. The authors do compare SScorpion with other scoring functions (Tables S4–S6 in the supplemental material) although they did not make the crucial comparisons with Spairwise in this context. In my opinion, the authors have not demonstrated that the network terms significantly increase the performance of the scoring and I would question the value of the interaction network diagrams (Figures 8–16).
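The point that extra parameters flatter training-set fit can be made quantitative with a standard correction such as adjusted R² (not something the authors report; used here purely for illustration):

```python
def adjusted_r2(r2, n_obs, n_params):
    """Adjusted R-squared: penalises fit quality by the number of parameters.

    r2       -- plain coefficient of determination on the training data
    n_obs    -- number of observations in the training set
    n_params -- number of fitted parameters (excluding the intercept)
    """
    return 1.0 - (1.0 - r2) * (n_obs - 1) / (n_obs - n_params - 1)

# Same raw fit, more parameters: the adjusted value drops.
print(round(adjusted_r2(0.80, 42, 5), 3))   # → 0.772
print(round(adjusted_r2(0.80, 42, 20), 3))  # → 0.61
```

Comparing models with different numbers of parameters on raw training-set fit alone, as Table 7 invites the reader to do, stacks the deck in favour of the bigger model.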

I’ll finish off by making some comments on the (mostly proprietary) assay results used to develop SScorpion. The authors state:

‘It is important to note that we optimize against a combination of training sets, in which for each set ligand affinities were determined with the same assay and for the same protein.’

I would certainly agree that mixing results from different assays is not a good idea when attempting to relate structure to activity. However, it is also important to note that the term ‘affinity’ strictly refers to Kd and this is not the same quantity as IC50 (which generally depends on substrate and co-factor concentrations). For example, you can account for intracellular ATP concentrations by running kinase inhibition assays at high ATP concentration. However, there is another issue that one needs to consider when dealing with IC50 values and I’ll illustrate it with reference to the compounds shown in the figure at the top of this paragraph which I’ve pulled from row 13 of Table 5. The pIC50 values shown in this figure are certainly consistent with cooperative binding of the two chlorine atoms. For example, the effect of adding a 2-chloro substituent to 1 (ΔpIC50 = 5.6 – 4.4 = 1.2) is a lot smaller than that observed (ΔpIC50 = 8.0 – 5.8 = 2.2) when 2 is modified in an analogous manner. When trying to quantify cooperativity it is very important that all IC50 values are measured as accurately as possible and the assay often needs to have a high dynamic range. Our perception of cooperativity for these compounds depends very much on the pIC50 for compound 1 being measured accurately and assay interference increasingly becomes an issue for weaker inhibitors. My main reason for bringing this up is that it may be possible to quantify the degree of interference in order to make the appropriate corrections and this blog post may also be of interest.

As stated earlier, I do not believe that the authors have shown that network descriptors add anything of real value. For the largest training set (II: PDE10), both SScorpion and Spairwise are outperformed by a single parameter (number of heavy atoms). Some readers may think this assessment overly harsh and I'd ask those readers to consider an alternative scenario in which QSAR models with different numbers of parameters were ranked according to how well they fit the training data. Thinking about a scoring function as a QSAR model may be instructive in other ways. For example, QSAR models that are optimistically termed 'global' may just be ensembles of local models, and to equate correlation with causation is usually to take the first step down a long, slippery slope.

Literature cited

Kuhn, Fuchs, Reutlinger, Stahl & Taylor, Rationalizing Tight Ligand Binding through Cooperative Interaction Networks. J. Chem. Inf. Model. 2011, 51, 3180–3198 DOI

Bissantz, Kuhn & Stahl, A Medicinal Chemist’s Guide to Molecular Interactions. J. Med. Chem. 2010, 53, 5061–5084 DOI

Stahl & Bajorath, Computational Medicinal Chemistry. J. Med. Chem. 2011, 54, 1–2 DOI

Kenny, Hydrogen Bonding, Electrostatic Potential, and Molecular Design. J. Chem. Inf. Model. 2009, 49, 1234–1244 DOI

Abraham, Duce, Prior, Barratt, Morris & Taylor, Hydrogen bonding. Part 9. Solute proton donor and proton acceptor scales for use in drug design. J. Chem. Soc., Perkin Trans. 2, 1989, 1355–1379 DOI

Abel, Wang, Friesner & Berne, A Displaced-Solvent Functional Analysis of Model Hydrophobic Enclosures. J. Chem. Theory Comput. 2010, 6, 2924–2934 DOI

Davis & Teague, Hydrogen Bonding, Hydrophobic Interactions, and Failure of the Rigid Receptor Hypothesis. Angew. Chem. Int. Ed. 1999, 38, 736–749 DOI

Bethel et al, Design of selective Cathepsin inhibitors. Bioorg. Med. Chem. Lett. 2009, 19, 4622–4625 DOI

Shapiro, Walkup & Keating, Correction for Interference by Test Samples in High-Throughput Assays. J. Biomol. Screen. 2009, 14, 1008–1016 DOI

Saturday, 4 February 2012

JCAMD 25th Anniversary Issue

The editors of the Journal of Computer-Aided Molecular Design commissioned a number of Perspectives on the state and future of the field to commemorate the journal's 25th anniversary. They have made this content open access for a limited period (I believe 3 months) so go check it out while the access is still open.

Friday, 3 February 2012

Fragment lead identification by SPR

FBDD is a maturing field and one sign of this maturation is the publication of a volume of Methods in Enzymology devoted to the subject. The article in this collection that most interested me was the review by Anthony Giannetti on the use of Surface Plasmon Resonance (SPR) in Fragment Lead Generation. The review is described as a ‘comprehensive walk-through’ and the in-depth treatment of topics such as target immobilization and buffer/compound preparation justifies this description. I’m still working my way through some of the data analysis sections...

The target is tethered to a surface in SPR and this is usually referred to as ‘immobilization’, which is an unfortunate term, albeit the one that is most commonly used in the literature. Vendors of competing assay technologies (who would naturally prefer you to use their technology instead) often present this as a weakness of SPR. One concern is that tethering will compromise the ability of the target to bind ligands and the review does cite a couple of articles which compare affinities measured with SPR to those measured using methods such as isothermal titration calorimetry.

The system in an SPR assay is heterogeneous, which is another way of saying that the concentration of protein is not uniform, particularly in the direction perpendicular to the surface to which it is tethered, and this creates some interesting possibilities. Tight binding occurs when the value of the ligand Kd is lower than the concentration of the protein to which it binds. We typically configure assays for measuring affinity and potency so that ligand concentration is significantly greater than protein concentration. This means that ligand binding does not affect the concentration of unbound ligand and the math is a whole lot easier if you can make this assumption. If, however, the protein concentration in your assay is 1nM and you want to measure the potency for a compound with an IC50 of 0.01nM you’re going to have a problem because you’ll need the compound at a concentration of 0.5nM in order to occupy half the binding sites. In enzyme inhibition assays, the concentration of the enzyme sets an upper limit on the potency that you can measure and this may be an issue for attempts to estimate the maximum potency of ligands.
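The arithmetic behind that 0.5nM figure follows from mass balance; here is a sketch of the standard quadratic (tight-binding) treatment, with all quantities in the same concentration units:

```python
import math

def complex_conc(p_total, l_total, kd):
    """Concentration of protein-ligand complex from total concentrations.

    Solves the mass-balance quadratic exactly, so it remains valid in
    the tight-binding regime where Kd << [P]total and the usual
    '[L]free = [L]total' approximation breaks down.
    """
    b = p_total + l_total + kd
    return (b - math.sqrt(b * b - 4.0 * p_total * l_total)) / 2.0

# 1nM protein with Kd = 0.01nM: ~0.51nM total ligand half-saturates it
# (0.5nM in complex plus 0.01nM free).
print(round(complex_conc(1.0, 0.51, 0.01), 3))  # → 0.5
```

When Kd is much larger than the protein concentration the exact solution collapses back to the familiar hyperbolic behaviour, which is why the depletion issue only bites for very potent ligands.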

In a heterogeneous system, things are not quite as simple because concentration is less clearly defined and you need to think in terms of quantities (in molar terms of course) of protein and ligand. Localising a small amount of protein on the chip surface rather than having a larger amount of protein distributed evenly throughout the sample volume means less depletion of the reservoir of unbound ligand when 50% of binding sites become occupied. Also in the SPR assay, the solution of ligand flows over the chip, making depletion of unbound ligand even less of a problem.

Tight binding is not usually a problem when screening fragments and the main reason for bringing up the subject was to get you thinking a bit about assays. There are a number of technologies for detecting the binding of fragments and quantifying the affinity with which they bind. This raises a couple of questions. Firstly, to what extent do we need new screening technologies for FBDD? Secondly, which weaknesses in the current methodology should be addressed with the highest priority?

Literature cited

Giannetti, From experimental design to validated hits: A comprehensive walk-through of fragment lead identification using surface plasmon resonance. Methods Enzymol. 2012, 493, 169–218 DOI

Saturday, 28 January 2012

Anyone for tennis?

So I'm back in Melbourne, one of my favorite cities, and am currently amusing myself by looking for transition states, which some of you will know are merely first-order saddle points on potential energy surfaces. I actually find reactivity a lot more interesting than binding so it's great that my hosts are interested in these problems. Of course there's a lot more to do in Melbourne than searching for negative force constants and imaginary frequencies and last week I went to watch the Australian Open.

Before I get to the tennis, I'll mention the Research Works Act, which can be seen as protectionism (of academic publishers), and readers of this blog will know that I have some issues with journal editors. Well it turns out that protectionism is everywhere, even in what you would think would be the fully competitive environment of the Australian Open. When you attend the Open, any bag you take in is (quite rightly) inspected and they'll be looking carefully for particularly dangerous items. Like lenses with focal lengths greater than 200mm. Why are these considered so dangerous, you might ask? Well the danger posed by such contraband is that it might allow its owner to take a really good picture. While the organisers of the Open want you to enjoy the tennis and buy lots of food and drink, they do not want you to take good pictures. Anyway here's a picture of the official photographers whom the organisers are 'protecting' from me and my Pentax K200.

The photos are from the match between Maria Sharapova and Angelique Kerber and I'd hoped that it would turn into an epic struggle of Kurskian proportions. As it transpired, Kerber's Panzerfausts were no match for Sharapova's Katyusha batteries. I couldn't help wondering if Maria had modelled her famous 'vocals' on the unique music of Stalin's Organ.

The match was not without its lighter moments and Maria aborted one serve to entertain us with a dance move from Swan Lake that would not have disgraced the Bolshoi. However, it was soon clear that poor Angelique was not having a good day in the office and, after she bashed yet another ball into the net, I was not going to contradict her. On the bright side, at least she didn't try to blame the Romanians or start raving about Steiner's division.

As had always seemed probable, the match concluded in two sets and at the post-match interview, Maria demonstrated her familiarity with the principles and practice of High-Throughput Screening.