Sunday, 25 January 2015

Molecular complexity in KL (Jan 2014)

I was in Kuala Lumpur about this time last year and visited the International Medical University, where I delivered a harangue.  It was a very enjoyable day and the lunch was excellent (as is inevitable in Malaysia, where it seems impossible to find even mediocre food).  We discussed molecular complexity at lunch and, since a picture is worth a thousand words, I put the place mat to good use.

Wednesday, 21 January 2015

It's a rhodanine... fetch the ducking stool



So I promised to do a blog post on rhodanines.

This isn’t really a post on rhodanines or even PAINS.  It’s actually a post on how we make decisions in drug discovery.  More specifically, it is about how we use data analysis to inform decisions in drug discovery.  It was prompted by a Practical Fragments post which I found to be a rather vapid rant that left me with the impression that a bandwagon had been leapt upon with little idea of whence it came or whither it was going.  I commented, suggesting that it might be an idea to present some evidence in support of the opinions expressed there, and my bigger criticism is of the reluctance to provide that evidence.  Opinions are like currencies and to declare one’s opinion to be above question is to risk sending it the way of the Papiermark.

However, the purpose of this post is not to chastise my friends at Practical Fragments, although I do hope that they will take it as constructive feedback that I found their post to fall short of the high standards that the drug discovery community has come to expect of Practical Fragments.  I’ll start by saying a bit about PAINS, which is an acronym for Pan Assay INterference compoundS, and it is probably fair to say that rhodanines are regarded as the prototypical PAINS class.  The hydrogen molecule of PAINS even?  It’s also worth stating that the observation of assay interference does not imply that the compound in question is actually interacting with a protein, and I’ll point you towards a useful article on how to quantify assay interference (and even correct for it when it is not too severe).  A corollary of this is that we can’t infer promiscuity (as defined by interacting with many proteins) or reactivity (e.g. with thiols) simply from the observation of a high hit rate.  Before I get into the discussion, I’d like you to think about one question.  What evidence do you think would be sufficient for you to declare the results of a study to be invalid simply on the basis of a substructure being present in the molecular structure of compound(s) that featured in that study?

The term PAINS was introduced in a 2010 JMC article about which I have already blogged.  The article presents a number of substructural filters which are intended to identify compounds that are likely to cause problems when screened, and these filters are based on analysis of the results from six high throughput screening (HTS) campaigns.  I believe that the filters are useful and of general interest to the medicinal chemistry community, but I would be wary of invoking them when describing somebody’s work as crap or asserting that the literature was being polluted by the offending structures.  One reason for this is that the PAINS study is not reproducible, and this limits the scope for using it as a stick with which to beat those who have the temerity to use PAINS in their research.  My basis for asserting that the study is not reproducible is that chemical structures and assay results are not disclosed for the PAINS, and neither are the targets for three of the assays used in the analysis.  There are also the questions of why the output from only six HTS campaigns was used in the analysis and how these six were chosen from the 40+ HTS campaigns that had been run.  Given that all six campaigns were directed at protein-protein interactions and employed AlphaScreen technology, I would also question the use of the term ‘Pan’ in this context.  It’s also worth remembering that sampling bias is an issue even with large data sets.  For example, one (highly cited) study asserts that pharmacological promiscuity decreases with molecular weight while another (even more highly cited) study asserts that the opposite trend applies.
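For readers who want to see what applying filters of this sort looks like in practice, here is a minimal sketch using RDKit, which ships an implementation of the published PAINS substructure definitions.  RDKit is purely my choice for illustration (it played no part in the original analysis) and the benzylidene rhodanine SMILES is just an illustrative structure, not one of the undisclosed screening compounds.

```python
from rdkit import Chem
from rdkit.Chem import FilterCatalog

# Build a catalog from RDKit's implementation of the published PAINS filters
params = FilterCatalog.FilterCatalogParams()
params.AddCatalog(FilterCatalog.FilterCatalogParams.FilterCatalogs.PAINS)
catalog = FilterCatalog.FilterCatalog(params)

# An illustrative 5-benzylidene rhodanine (not a structure from the article)
mol = Chem.MolFromSmiles("O=C1NC(=S)SC1=Cc1ccccc1")

entry = catalog.GetFirstMatch(mol)
if entry is not None:
    print("Flagged:", entry.GetDescription())  # expect a rhodanine-family alert
else:
    print("No PAINS alert")
```

Note that all a match tells you is that an alert substructure is present; it says nothing, by itself, about what the compound actually did in any assay.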

This is probably a good point for me to state that I’m certainly not saying that PAINS compounds are ‘nice’ in the context of screening (or in any other context).  I’ve not worked up HTS output for a few years now and I can’t say that I miss it.  Generally, I would be wary of any compound whose chemical structure suggested that it would be electrophilic or nucleophilic under assay conditions, that it would absorb strongly in the UV/visible region or that it would have ‘accessible’ redox chemistry.  My own experience with problem compounds was that they didn’t usually reveal their nasty sides by hitting large numbers of assays.  For example, the SAR for a series might be ‘flat’ or certain compounds might be observed to hit mechanistically related assays (e.g. a cysteine protease and a tyrosine phosphatase).  When analyzing HTS results, the problem is not so much deciding that a compound looks ‘funky’ but more getting hard evidence that allows you to apply the molecular captive bolt with a clear conscience (as opposed to “I didn’t like that compound” or “it was an ugly brute so I put it out of its misery” or “it went off while I was cleaning it”).


This is a good point to talk about rhodanines in a bit more detail and to introduce the concept of substructural context, which may be unfamiliar to some readers, and I'll direct you to the figure above.  Substructural context becomes particularly important if you’re extrapolating bad behavior observed for one or two compounds to all compounds in which a substructure is present.  Have a look at the four structures in the figure and think about what they might be saying to you (if they could talk).  Structure 1 is rhodanine itself but a lot of rhodanine derivatives have an exocyclic double bond, as is the case for structures 2 to 4.  The rhodanine ring is usually electron-withdrawing, which means that a rhodanine with an exocyclic double bond can function as a Michael acceptor, and nucleophilic species like thiols can add across the exocyclic double bond.  I pulled structure 3 from the PAINS article, where it is also known as WEHI-76490, and I’ve taken the double bond stereochemistry to be as indicated in the article.  Structure 3 has a styryl substituent on the exocyclic double bond, which means that it is a diene and has sigmatropic options that are not available to the other structures.  Structure 4, like rhodanine itself, lacks a substituent on the ring nitrogen and this is why I qualified ‘electron-withdrawing’ with ‘usually’ three sentences previously.  I managed to find a pKa of 5.6 for 4 and this means that we’d expect the compound to be predominantly deprotonated at neutral pH (bear in mind that some assays are run at low pH).  Any ideas about how deprotonation of a rhodanine like 4 would affect its ability to function as a Michael acceptor?  As an aside, I would still worry about a rhodanine that was likely to deprotonate under assay conditions but that would be going off on a bit of a tangent.
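To put a number on that last point, the Henderson-Hasselbalch equation gives the fraction of a monoprotic acid that is deprotonated at a given pH.  Here is a minimal sketch in Python, using the pKa of 5.6 quoted above; the two pH values are simply illustrative.

```python
# Fraction of a monoprotic acid that is deprotonated at a given pH,
# from the Henderson-Hasselbalch equation: f = 1 / (1 + 10**(pKa - pH))
def fraction_deprotonated(pKa: float, pH: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

print(fraction_deprotonated(5.6, 7.4))  # ~0.98 at physiological pH
print(fraction_deprotonated(5.6, 5.0))  # ~0.20 in a low-pH assay buffer
```

So at neutral pH a rhodanine like 4 would be almost entirely anionic, which is why the question about its ability to function as a Michael acceptor is worth thinking about.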


Now is a good time to take a look at how some of the substructural context of rhodanines was captured in the PAINS paper, and we need to go into the supplemental information to do this.  Please take a look at the table above.  I’ve reconstituted a couple of rows from the relevant table in the supplemental material that is provided with the PAINS article.  You’ll notice that there are two rhodanine substructural definitions, only one of which has the exocyclic double bond that would allow it to function as a Michael acceptor.  The first substructure matches the rhodanine definitions for the 2006 BMS screening deck filters, although the 2007 Abbott rules for compound reactivity to protein thiols allow the exocyclic double bond to be to any atom.  Do you think that the 60 compounds matching the first substructure that fail to hit a single assay should be regarded as PAINS?  What about the 39 compounds that hit a single assay?  You’ll also notice that the enrichment (defined as the ratio of the number of compounds hitting two to six assays to the number of compounds hitting no assays) is actually greater for the substructure lacking the exocyclic double bond.  Do you think that it would be appropriate to invoke the BMS filters or Abbott rules as additional evidence for bad behavior by compounds in the second class?  As an aside, it is worth remembering that forming a covalent bond with a target is a perfectly valid way to modulate its activity, although there are some other things that you need to be thinking about.
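To make the distinction concrete, here is a minimal sketch of how the two kinds of definition might be expressed as SMARTS.  The patterns below are my own illustrative ones, not the exact PAINS, BMS or Abbott definitions, and the enrichment function simply encodes the definition given above.

```python
from rdkit import Chem

# Illustrative SMARTS only; these are NOT the exact PAINS, BMS or Abbott definitions
with_ene = Chem.MolFromSmarts("O=C1C(=[#6])SC(=S)N1")  # exocyclic C=C: potential Michael acceptor
without_ene = Chem.MolFromSmarts("O=C1[CH2]SC(=S)N1")  # saturated C5: no exocyclic double bond

benzylidene = Chem.MolFromSmiles("O=C1NC(=S)SC1=Cc1ccccc1")  # illustrative 5-benzylidene rhodanine
n_ethyl = Chem.MolFromSmiles("O=C1CSC(=S)N1CC")              # illustrative N-ethyl rhodanine

for name, mol in [("benzylidene", benzylidene), ("N-ethyl", n_ethyl)]:
    print(name, mol.HasSubstructMatch(with_ene), mol.HasSubstructMatch(without_ene))
# benzylidene True False
# N-ethyl False True

# Enrichment as defined in the post: compounds hitting 2-6 assays per compound hitting none
def enrichment(n_hitting_2_to_6: int, n_hitting_0: int) -> float:
    return n_hitting_2_to_6 / n_hitting_0
```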

I should point out that the PAINS filters do provide a richer characterization of substructure than what I have summarized here.  If doing HTS, I would certainly (especially if using AlphaScreen) take note if any hits were flagged up as PAINS, but I would not summarily dismiss somebody's work as crap simply on the basis that they were doing assays on compounds that incorporated a rhodanine scaffold.  If I were serious about critiquing a study, I’d look at some of the more specific substructural definitions for rhodanines and try to link these to individual structures in the study.  However, there are limits to how far you can go with this and, depending on the circumstances, there are a number of ways that the authors of a critiqued study might counter-attack.  If they’d not used AlphaScreen and were not studying protein-protein interactions, they could argue irrelevance on the grounds that the applicability domain of the PAINS analysis is restricted to AlphaScreen being used to study protein-protein interactions.  They could also take the gloves off and state that six screens, the targets for three of which were not disclosed, are not sufficient for this sort of analysis and that the chemical structures of the offending compounds were not provided.  If electing to counter-attack on the grounds that attack is the best form of defense, they might also point out that the source(s) for the compounds were not disclosed and that it is not clear how compounds were stored, how long they spent in DMSO prior to assay and exactly what structure/purity checks were made.

However, I created this defense scenario for a reason, and that reason is not that I like rhodanines (I most certainly don’t).  Had it been done differently, the PAINS analysis could have been a much more effective (and heavier) stick with which to beat those who dare to transgress against molecular good taste and decency.  Two things needed to be done to achieve this.  Firstly, using results from a larger number of screens with different screening technologies would have gone a long way to countering applicability domain and sampling bias arguments.  Secondly, disclosing the chemical structures and assay results for the PAINS would have made it a lot easier to critique compounds in literature studies, since these could be linked by molecular similarity (or even direct match) to the actual ‘assay fingerprints’ without having to worry about the subtleties (and uncertainties) of substructural context.  This is what Open Science is about.

So this is probably a good place to leave things.  Even if you don't agree with what I've said,  I hope that this blog post will have at least got you thinking about some things that you might not usually think about. Also have another think about that question I posed earlier. What evidence do you think would be sufficient for you to declare the results of a study to be invalid simply on the basis of a substructure being present in the molecular structure of compound(s) that featured in that study?


Sunday, 11 January 2015

New year, new blog name...

A new year and a new title for the blog, which will now just be ‘Molecular Design’.  I have a number of reasons for dropping fragment-based drug discovery (FBDD) from the title but I first want to say a bit about molecular design because that may make those reasons clearer.  Molecular design can be defined as control of the behavior of compounds and materials by manipulation of molecular properties.  The use of the word ‘behavior’ in the definition is very deliberate because we design a compound or material to actually do something, like bind to the active site of an enzyme, conduct electricity or absorb light of a particular wavelength.  A few years ago, I noted that molecular design can be hypothesis-driven or prediction-driven.  In making that observation, I was simply articulating something that many would have already been aware of rather than presenting radical new ideas.  However, I was also making the point that it is important to articulate our assumptions in molecular design and to be brutally honest about what we don’t know.

Hypothesis-driven molecular design (HDMD) can be thought of as a framework in which to establish what, in the interest of generality, I’ll call ‘structure-behavior relationships’ (SBRs) as efficiently as possible.  When we use HDMD, we acknowledge that it is not generally possible to predict the behavior of compounds directly from molecular structure in the absence of measurements for structurally related compounds.  There is an analogy between HDMD and statistical molecular design (SMD) in that both can be seen as ways of obtaining the information required for making predictions, even though the underlying philosophies may differ somewhat.  The key challenge for both HDMD and SMD is identifying the molecular properties that will have the greatest influence on the behavior of compounds, and this is challenging because you need to do it without measured data.  An in-depth understanding of molecular properties (e.g. conformations, ionization, tautomers, redox potential, metal complexation, UV/vis absorption) is important when doing HDMD because this enables you to pose informative hypotheses.  In essence, HDMD is about asking good questions with informative compounds and relevant measurements, and the key challenge is how to make the approach more systematic and objective.  One key molecular property is something that I’ll call ‘interaction potential’ and this is important because the behavior of a compound is determined to a large extent by the interactions of its molecules with the environments (e.g. crystal lattice, buffered aqueous solution) in which they exist.

Since FBDD is being dropped from the blog title, I thought that I’d say a few words about where FBDD fits into the molecular design framework.  I see FBDD as essentially a smart way to do structure-based design in that ligands are assembled from proven molecular recognition elements.  The ability to characterize weak binding allows some design hypotheses to be tested without having to synthesize new compounds.  It’s also worth remembering that FBDD has its origins in computational chemistry ( MCSS | Ludi | HOOK ) and that an approach to crystallographic mapping of protein surfaces was published before the original SAR by NMR article made its appearance.  My own involvement with FBDD began in 1997 and I focused on screening library design right from the start.  The screening library design techniques described in blog posts here ( 1 | 2 | 3 | 4 | 5 ) and in a related journal article have actually been around for almost 20 years, although I think they still have some relevance to modern FBDD even if they are getting a bit dated.  If you’re interested, you can find a version of the SMARTS-based filtering software that I was using even before Zeneca became AstraZeneca.  It’s called SSProFilter and you can find the source code (it was built with the OEChem toolkit) in the supplemental information for our recent article on alkane/water logP.  So why drop FBDD from the blog title?  For me, molecular design has always been bigger than fragment-based molecular design and my involvement in FBDD projects has been minimal in recent years.  FBDD is increasingly becoming mainstream and, in any case, dropping FBDD from the blog title certainly doesn’t prevent me from discussing fragment-based topics or even indulging in some screening library design should the opportunities arise.
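For anyone curious about what this kind of SMARTS-based filtering involves, here is a minimal stand-in sketch.  SSProFilter itself was built with OEChem; the RDKit version below is mine, and the input file name and the two filter patterns are hypothetical illustrations.

```python
from rdkit import Chem

# Hypothetical filter set (pattern names and SMARTS are illustrative only)
filters = {
    "rhodanine": "O=C1CSC(=S)N1",
    "aldehyde": "[CX3H1](=O)[#6]",
}
patterns = {name: Chem.MolFromSmarts(smarts) for name, smarts in filters.items()}

# Keep compounds from a SMILES file (one record per line) that match no filter
with open("library.smi") as f:  # hypothetical input file
    for line in f:
        smiles = line.split()[0]
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            continue  # skip unparseable records
        if not any(mol.HasSubstructMatch(p) for p in patterns.values()):
            print(smiles)
```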

As some readers will be aware, I have occasionally criticized some of the ways that things get done in drug discovery, so it’s probably a good idea to say something about the directions in which I think pharmaceutical design needs to head.  I wrote a short Perspective for the JCAMD 25th anniversary issue three years ago and this still broadly represents my view of where the field should be going.  Firstly, we need to acknowledge the state of predictive medicinal chemistry and accept that we will continue to need some measured data for the foreseeable future.  This means that, right now, we need to think more about how to collect the most informative data as efficiently as possible and less about predicting pharmacokinetic profiles directly from molecular structure.  Put another way, we need to think about drug discovery in a Design of Experiments framework.  Secondly, we need to look at activity and properties in terms of relationships between structures, because it’s often easier to predict differences in the values of a property than it is to predict the values themselves.  Thirdly, we need to at least consider alternatives to octanol/water for partition coefficient measurement.
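A minimal sketch of what I mean by predicting differences rather than values.  All of the numbers below are made-up placeholders, not measured data; the point is only that the prediction is anchored to a measurement on a structurally related compound.

```python
# Predicting a property difference rather than an absolute value.
# All numbers are made-up placeholders for illustration.
logP_parent = 2.1    # assumed measured logP for a parent compound
delta_H_to_Cl = 0.7  # assumed average logP shift for an H -> Cl transformation

# Predict the analog relative to the measured parent, rather than from scratch
logP_analog_predicted = logP_parent + delta_H_to_Cl
print(logP_analog_predicted)  # 2.8
```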

There are also directions in which I think we should not be going.  Drug discovery scientists will be aware of an ever-expanding body of rules, guidelines and metrics (RGMs) that prompts analogy with The Second Law.  Some of this can be traced to the success of the Rule of 5, and many have learned that it is a lot easier to discover metrics than it is to discover drugs.  If you question a rule, the standard response will be, “it’s not a rule, it’s a guideline”, and your counter-response should be, “you’re the one who called it a rule”.  Should you challenge the quantitative basis of a metric, which by definition is supposed to measure something, it is likely that you will be told how useful it is.  This defense is easily outflanked by devising a slightly different metric and asking whether it would be more or less useful.  Another pattern that many drug discovery scientists will have recognized is that, in the RGM business, simplicity trumps relevance.

Let’s talk about guidelines for a bit.  Drug discovery guidelines need to be based on observations of reality, and that usually means trends in data.  If you’re using guidelines, then you need to know the strength of the trend(s) on which the guidelines are based because this tells you how rigidly you should adhere to them.  Conversely, if you’re recommending that people use the guidelines that you’re touting, then it’s very naughty to make the relevant trends appear to be stronger than they actually are.  There are no winners when data gets cooked, and I think this is a good point at which to conclude the post.

So thanks for reading, and I’ll try to whet your appetite by saying that the next blog post is going to be on PAINS.  Happy new year!