Sunday 28 March 2010

FBDD and Networking

Reading an account of the session at the ACS meeting on the application of computational methods to FBDD reminded me that it would be a good time to raise awareness of networking groups in this area. Both this blog and Practical Fragments allow readers to comment on posts, although this tends not to happen with the frequency that it does at In the Pipeline, probably reflecting the huge readership, frequent updating and diverse content of what I consider to be the best drug discovery blog by a long way.

People interested in FBDD may already belong to a number of relevant LinkedIn groups. These groups offer some advantages over blogs for getting discussions going, in that anyone can start a discussion and group members are alerted by email whenever somebody makes a new comment. I’ll list some of them below in case there are any that you’ve not yet heard about.

Fragment Based Drug Discovery (This group is linked by both FBDD blogs)

Label Free Assay Technology Group (It is the assay that makes FBDD possible. The weaker the binding that you can measure reliably, the more powerful your assay)

Structural Biology (X-ray Crystallography, NMR Spectroscopy, Electron Microscopy) (Generally you’re going to need crystal structures to take fragment hits forward)

Job opportunities in Computational Chemistry and Biology, Xray Crystallography, Fragment Based DD

Recently, I submitted the same item for discussion at a number of LinkedIn groups. I invited group members to share their views on the most appropriate technologies for detecting fragment binding. I learned about some new ways to configure SPR experiments and the use of Tm-shift assays. Most of the discussion was in the Structural Biology group (see discussion), although there was helpful input from the relatively new Label Free Assay Technology Group (see discussion), so thank you to all the participants. It was also great to see a couple of familiar faces from my days in Big Pharma, including a co-author of an article that a number of us wrote back in 2007.

Saturday 13 March 2010

Interference, PAIN and cysteine pathologies

Dan provided some useful comments on the last post and I think it’s better to respond with a post, since this makes everything more visible to the readers of both our blogs. I agree with Dan’s point that there are pitfalls, such as compound aggregation, in addition to the interference that Adam and colleagues describe in their article. In an ideal situation one would always have the ability to measure weak affinity directly. Protein-detected NMR is one of my personal favourites but you do need labelled protein and, if you want to get full value for your money (labelled protein is not cheap), you’ll also need resonance assignments. SPR is widely applicable and, like protein-detected NMR, will provide a direct measurement of affinity (and a whole bunch of other stuff). Isothermal titration calorimetry (ITC) represents another option, although I believe that the technique is relatively sample-hungry and more limited than the other two in the weakness of binding that can be measured. Also, you do need heat, so to speak, even though the experiment is isothermal.

Nevertheless, you can get to the point of having crystal structures with bound fragments using only a biochemical assay to measure potency. Given that you may well be screening at concentrations one or two orders of magnitude above what is ‘normal’ in HTS, it does make sense to use the approach that Adam and colleagues describe even if you’re going to follow up with SPR or NMR. I do sometimes wonder if the promiscuous behaviour of some inhibitors is due to this sort of interference rather than to aggregation. One intriguing question is whether aggregates can ‘inhibit’ by changing the spectroscopic and fluorimetric properties of assay mixtures rather than by interacting with proteins. At least there’s usually the option of running assays with added detergent to check for aggregation.
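As an aside, a quick back-of-the-envelope calculation shows why fragment screens have to be run at such high concentrations in the first place. This is just a minimal sketch assuming simple 1:1 binding with no ligand depletion; the Kd and concentrations are illustrative numbers of my own choosing, not taken from any of the articles discussed here.

```python
# Fractional occupancy of a target by a ligand under simple 1:1 binding
# (no ligand depletion): theta = [L] / ([L] + Kd)
def fractional_occupancy(ligand_conc_uM, kd_uM):
    return ligand_conc_uM / (ligand_conc_uM + kd_uM)

# A typical fragment hit might bind with Kd of roughly 1 mM (1000 uM).
kd_uM = 1000.0

# At a 'normal' HTS concentration of 10 uM the occupancy is about 1%,
# which is essentially invisible; pushing the concentration up by one or
# two orders of magnitude makes the binding measurable.
for conc_uM in (10.0, 100.0, 500.0, 1000.0):
    theta = fractional_occupancy(conc_uM, kd_uM)
    print(f"[L] = {conc_uM:7.1f} uM -> occupancy = {theta:.1%}")
```

Of course the same arithmetic is what makes interference such a concern: at several hundred micromolar, even modest absorbance or fluorescence from the test compound can perturb the readout.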

I won’t say much right now about the structural nasties that Jonathan Baell and Georgina Holloway have identified as PAINS since I’ll be visiting Jonathan at WEHI next Friday. I became acquainted with some of these unsavoury structural types during my time in Big Pharma and do not believe that their PAINfulness is specific to the AlphaScreen technology that the WEHI researchers are using. Back in those days, we had the Decrapper and a program called Flush...

Dan mentioned the Practical Fragments post on a Cruzain Screen, so I thought I’d finish with a couple of papers that show how things can come unstuck when you’ve got a catalytic cysteine with a malicious streak. In the dock is none other than PTP1B, a target that is much-loved by disease area strategists and much-hated by screening groups. I’m not going to review the articles or even comment on them right now. Just read them in the correct order and perhaps we can pick up this theme later.

PTP1B: Read this first

PTP1B: Read this second

Literature Cited

Baell & Holloway, New Substructure Filters for Removal of Pan Assay Interference Compounds (PAINS) from Screening Libraries and for Their Exclusion in Bioassays. J. Med. Chem. 2010, ASAP | DOI

Liljebris et al, Synthesis and biological activity of a novel class of pyridazine analogues as non-competitive reversible inhibitors of protein tyrosine phosphatase 1B (PTP1B). Bioorg. Med. Chem. 2002, 10, 3197-3212 | DOI

Tjernberg et al, Mechanism of action of pyridazine analogues on protein tyrosine phosphatase 1B (PTP1B). Bioorg. Med. Chem. Lett. 2004, 14, 891-897 | DOI

Sunday 7 March 2010

Interference correction in biochemical assays

Surface Plasmon Resonance (SPR) was in focus recently, both here and over at Practical Fragments. Now, however, I’d like to take a look at using biochemical assays to identify fragments that bind to targets of interest. Biochemical screens can typically be run at high throughput and are compatible with automation, which makes it easy to do follow-up screening with analogues. Furthermore, the hits identified by a biochemical assay are actually inhibiting the target rather than just binding to it. A criticism of biochemical screens is that they measure binding indirectly and are prone to interference. Sometimes they are used as a pre-screen to reduce the number of compounds that need to be evaluated in a lower-throughput biophysical assay. However, there are things that you can do to make your biochemical assay more reliable and meaningful. And maybe even more fun.

The article that I’ve chosen to take a look at in this post is by Adam Shapiro and some other colleagues from my days in Big Pharma. Before I met these folk, most of my fragment work had been around libraries for NMR screening and I learned from them how it is possible to correct for some of the interference from test samples in biochemical assays.

Inhibition is typically detected in a biochemical assay by quantifying changes in light absorption, fluorescence or luminescence. In high throughput applications ‘assay components are added serially to wells without any filtration or washing steps’, which means ‘that the test sample remains in the well during the optical measurement and can interfere with it’. Compounds that absorb in the UV or visible range, that fluoresce, or that quench fluorescence can therefore change the readout without actually binding to the target protein. Other, less obvious, causes of interference include insolubility of the test compound (turbidity can lead to detection of highly polarised scattered light) and meniscus deepening, which decreases the path length. Compounds are typically assayed at relatively high concentrations in fragment screening, making it especially important to recognise and account for assay interference in these applications.
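To get a feel for how large this effect can be, here is a minimal sketch of how a coloured but completely inactive compound could masquerade as an inhibitor simply by absorbing light at the readout wavelength. It combines the Beer-Lambert law with the simplifying assumptions that the assay is set up so that inhibition reduces the optical signal and that the whole signal passes once through the well; the extinction coefficient, concentration and path length are made-up illustrative values, not taken from the Shapiro article.

```python
def apparent_inhibition_from_absorbance(epsilon_M_cm, conc_M, path_cm):
    """Apparent % inhibition caused purely by the test compound absorbing
    light at the readout wavelength. Beer-Lambert: A = epsilon * c * l,
    so the fraction of light transmitted is 10**(-A) and an optically
    absorbing (but non-binding) compound appears to remove
    (1 - 10**(-A)) of the signal."""
    absorbance = epsilon_M_cm * conc_M * path_cm
    transmitted = 10.0 ** (-absorbance)
    return 100.0 * (1.0 - transmitted)

# A weakly coloured compound (epsilon ~ 2000 M^-1 cm^-1) screened at a
# fragment-like concentration of 500 uM with a 0.5 cm effective path:
print(apparent_inhibition_from_absorbance(2000.0, 500e-6, 0.5))
# ~68% 'inhibition' without the compound ever touching the protein.
```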

In addition to providing a useful discussion of the causes of interference, the article describes a practical approach to correcting for it by running ‘artefact assays’. These involve running additional plates in which the wells contain the same test samples but no target protein. The wells in the artefact assay plate also need to contain whatever is responsible for generating the signal (e.g. reaction product), and a baseline can be defined by preparing wells without test samples. The authors describe in some detail how they apply the corrections and, since this is only a summary of the article, I’ll leave it to you to go and check it out. However, I would like to conclude by noting that the authors also suggest criteria for rejecting data when the interference is too great for meaningful correction.
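To make the general idea concrete, here is a minimal sketch of what an artefact-plate correction might look like, assuming a simple multiplicative interference model in which each compound scales the optical readout by a constant factor that can be estimated from the artefact plate. This is my own simplified illustration rather than the authors’ actual procedure, and the function names, thresholds and numbers are hypothetical, so do read the paper for the real corrections and rejection criteria.

```python
def interference_factor(artefact_signal, artefact_baseline):
    """Estimate how much a test compound scales the optical readout, using
    the artefact plate: same compound plus signal generator (e.g. reaction
    product) but no target protein. A factor of 1 means no interference;
    <1 means the compound absorbs/quenches, >1 means it adds signal."""
    return artefact_signal / artefact_baseline

def corrected_percent_inhibition(test_signal, uninhibited_control,
                                 artefact_signal, artefact_baseline,
                                 max_correction=3.0):
    """Correct the assay-plate signal for optical interference and convert
    to % inhibition. Return None if the interference is too large to
    correct reliably (a crude stand-in for rejection criteria)."""
    factor = interference_factor(artefact_signal, artefact_baseline)
    if factor <= 0 or factor > max_correction or factor < 1.0 / max_correction:
        return None  # interference too severe for meaningful correction
    corrected_signal = test_signal / factor
    return 100.0 * (1.0 - corrected_signal / uninhibited_control)

# A compound that quenches 40% of the signal (factor 0.6) but does not
# inhibit at all: uncorrected it would look like a 40% inhibitor, whereas
# the corrected value comes out close to 0%.
print(corrected_percent_inhibition(test_signal=600.0,
                                   uninhibited_control=1000.0,
                                   artefact_signal=600.0,
                                   artefact_baseline=1000.0))
```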

Literature Cited

Shapiro, Walkup & Keating, Correction for Interference by Test Samples in High-Throughput Assays. J. Biomol. Screen. 2009, 14, 1008-1016 | DOI
