Sunday, 7 March 2010

Interference correction in biochemical assays

Surface Plasmon Resonance (SPR) has been in focus recently, both here and over at Practical Fragments. Now, however, I’d like to take a look at using biochemical assays to identify fragments that bind to targets of interest. Biochemical screens can typically be run at high throughput and are compatible with automation, which makes it easy to follow up hits with analogs. Furthermore, the hits identified in a biochemical assay are actually inhibiting the target rather than merely binding to it. A criticism of biochemical screens is that they measure binding indirectly and are prone to interference, and sometimes they are used only as a pre-screen to reduce the number of compounds that need to be evaluated in a lower-throughput biophysical assay. However, there are things that you can do to make your biochemical assay more reliable and meaningful. And maybe even more fun.

The article that I’ve chosen to look at in this post is by Adam Shapiro and colleagues from my days in Big Pharma. Before I met these folk, most of my fragment work had been around libraries for NMR screening, and I learned from them how it is possible to correct for some of the interference from test samples in biochemical assays.

Inhibition is typically detected in a biochemical assay by quantifying changes in light absorption, fluorescence or luminescence. In high throughput applications ‘assay components are added serially to wells without any filtration or washing steps’ which means ‘that the test sample remains in the well during the optical measurement and can interfere with it’. This means that compounds that absorb in the UV or visible range, that fluoresce, or that quench fluorescence can all change the readout without actually binding to the target protein. Other, less obvious causes of interference include insolubility of the test compound (turbidity can lead to detection of highly polarised scattered light) and meniscus deepening, which decreases the optical path length. Compounds are typically assayed at relatively high concentrations in fragment screening, making it especially important to recognise and account for assay interference in these applications.

In addition to providing a useful discussion of the causes of interference, the article describes a practical approach to correcting for it by running ‘artefact assays’. These involve running additional plates in which the wells contain the same test samples but no target protein. The wells in the artefact assay plate also need to contain whatever is responsible for generating the signal (e.g. reaction product), and a baseline can be defined by preparing wells without test samples. The authors describe in some detail how they apply the corrections and, since this is only a summary, I’ll leave it to you to go and check their article out. However, I would like to conclude by noting that the authors also suggest criteria for rejecting data when interference is too great for meaningful correction.
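The basic idea of an artefact correction can be sketched in a few lines of code. This is my own illustrative version, not the authors’ exact formulas: the function name, the simple subtractive correction, and the rejection threshold are all assumptions made for the sake of the example — consult the paper for the real treatment.

```python
# Illustrative sketch of an artefact-assay correction (not the authors' exact
# method). For each test sample there is a matched "artefact" well containing
# the same sample and the signal-generating species, but no target protein.

def correct_well(assay_signal, artefact_signal, artefact_baseline,
                 max_interference_fraction=0.5):
    """Return the interference-corrected signal for one well, or None if the
    interference is too large for a meaningful correction (a crude stand-in
    for the authors' rejection criteria).

    assay_signal      -- raw readout from the well with target protein
    artefact_signal   -- readout from the matched no-protein well
    artefact_baseline -- readout from no-protein wells without test sample
    """
    # Excess signal attributable to the test sample itself (absorbance,
    # fluorescence, quenching, turbidity, ...):
    interference = artefact_signal - artefact_baseline
    if abs(interference) > max_interference_fraction * artefact_baseline:
        return None  # reject: the compound distorts the readout too strongly
    return assay_signal - interference
```

For example, a well reading 1000 units whose artefact well reads 520 against a baseline of 500 would be corrected down to 980, whereas an artefact well reading 900 would be rejected outright.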

Literature Cited

Shapiro, Walkup and Keating, Correction for Interference by Test Samples in High-Throughput Assays. J. Biomol. Screen. 2009, 14, 1008–1016.


Dan Erlanson said...

I agree that biochemical assays can be useful, but there are pitfalls in addition to simple interference, one of the biggest of which is compound aggregation. This can trip up even people who are acutely aware of the problem. Brian Shoichet and Adam Renslo published an entertaining paper last year in which they spent some time optimizing a series of cruzain inhibitors, only to discover that they had 10-fold less detergent in their assay than they thought. The compounds even had interpretable SAR, but it turned out they had been optimized for aggregation, not binding to cruzain. Fortunately they discovered the error, and their mishap makes a useful lesson, but many other papers likely report artifacts, not inhibitors.

There are also plenty of compounds that don’t appear nasty at first glance but light up in almost every assay; Jonathan Baell and Georgina Holloway call these PAINS (pan-assay interference compounds). They recently published a paper with extensive substructure filters to help people recognize them. For example, there are dozens of papers (including some fragment-based ones) describing rhodanines as inhibitors of various targets, but the majority of these are probably not useful.

As more people move into screening, particularly folks without much prior experience in this area, it is important that they are aware of these problems. Baell describes PAINS as “polluting the literature”, which I think is accurate: countless hours are spent following up on artifacts. Before we publish, we all need to be careful we aren’t filling journals with rubbish that other researchers will need to clean up!

Paul said...

I liked Jonathan's paper on PAINS and agree with the sentiment, but we should also consider the possibility of false false positives: that is, compounds that interfere with absorption-, fluorescence- or luminescence-based assays, or are self-aggregators, but are nevertheless active compounds in their own right. Whole classes of potentially useful drug leads may be discarded based on dogma surrounding compounds we just "know are not any good". The PAINS substructure filters may be useful for querying your compound library for awareness, but we should always be wary of letting rule-based approaches to drug discovery become too rigid.

As an example, in the natural products area people routinely discard any polyphenolic compounds because they are known promiscuous binders. But we also know, or should remember, that there are large numbers of bioactive polyphenols, most commonly antioxidants.

Stéphane Quideau wrote a nice essay on this subject; I'll quote the last bit below...

Besides this general mode of action based on the chemical reactivity inherent to the phenol function, simple phenols and certain polyphenols can also physically and specifically interact with biomolecules, including therapeutically significant enzymes. In this context, it is worth recalling that phenols are amphiphilic entities capable of mimicking the physico-chemical behavior of polar aromatic amino acids such as tyrosine, which is often found as a key residue in functional proteins. Such considerations raise some intriguing and fascinating questions. Why did nature install a hydroxyl group on the amino acid phenylalanine? Why did plants choose to produce phenolic secondary metabolites? As a chemical defence against pathogens and herbivores? Probably so, among many other beneficial properties that plants can exploit from their (poly)phenolic metabolites for their growth, reproduction, resistance and protection against their environment. It is then somewhat surprising to realize that plant polyphenols have been mostly left out of medicinal drug developments. The reasons of this relative disapproval of polyphenols by the pharmaceutical industry are unclear, but medicinal chemists might have been influenced by the earlier considerations of plant “tanning” polyphenols as structurally rather undefined and water-soluble oligomers only capable of forming complexes with alkaloids and proteins and of chelating metallic ions in non-specific manners. These considerations might be indeed justified for some polyphenols such as the gallotannins causing the precipitation of collagen during tanning processes, or the proanthocyanidins interacting with salivary proteins to exhibit the perception of astringency upon tasting red wines. 
Hence, standard industrial extraction protocols of plant secondary metabolites usually involve a step to ensure the complete passage of all polyphenolic compounds in aqueous extracts in order to avoid “false-positive” results in screening against a given biomolecular target. Some of us are of the opinion that a closer look at these put-aside aqueous extracts should be worth it, for they may contain some polyphenolic “magic bullets”, or at least interesting lead compounds for pharmaceutical drug developments.

Pete said...

In response to Paul's comments, I'll mention one concern that medicinal chemists have about phenols (even monophenols): they tend to be prone to metabolism (e.g. glucuronidation), which can make it difficult to achieve the sustained free blood levels required for oral dosing.

sfthrgrhhh said...

Compound aggregation is indeed a tricky issue in biochemical assays. Fortunately, follow-up biophysical methods such as SPR and BLI, which look at direct binding of these compounds to their targets, can weed them out - if you look for them. SPR and BLI report a signal that reflects the size of the analyte, so even small aggregates that show dose-dependent binding can be spotted by the larger-than-expected signal they give in the biophysical assay.
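The "larger than expected signal" check above can be made quantitative with the standard back-of-envelope SPR calculation: the maximum expected response for 1:1 binding scales with the analyte-to-ligand mass ratio. A minimal sketch (function names and the 2x tolerance are my own assumptions, not from any particular instrument vendor):

```python
# Sketch of the standard SPR sanity check: for 1:1 binding, the maximum
# response (Rmax, in resonance units) scales with the ratio of analyte
# molecular weight to immobilized-ligand molecular weight.

def theoretical_rmax(r_ligand, mw_ligand, mw_analyte, stoichiometry=1.0):
    """Maximum expected response for n:1 binding.

    r_ligand   -- immobilization level of the target protein (RU)
    mw_ligand  -- molecular weight of the immobilized protein (Da)
    mw_analyte -- molecular weight of the fragment/compound (Da)
    """
    return r_ligand * (mw_analyte / mw_ligand) * stoichiometry

def looks_like_aggregate(observed_ru, r_ligand, mw_ligand, mw_analyte,
                         tolerance=2.0):
    """Flag responses far above the 1:1 ceiling as possible aggregation or
    other super-stoichiometric binding. The 2x tolerance is arbitrary."""
    ceiling = theoretical_rmax(r_ligand, mw_ligand, mw_analyte)
    return observed_ru > tolerance * ceiling
```

For example, a 300 Da fragment binding 1:1 to a 30 kDa protein immobilized at 5000 RU can give at most about 50 RU; an observed response of several hundred RU is a strong hint that something bigger than one molecule is landing on the surface.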

Pete said...

I mentioned 'Giannetti et al, Surface Plasmon Resonance Based Assay for the Detection and Characterization of Promiscuous Inhibitors, J. Med. Chem. 2008, 51, 574-580' in the Feb 11 post on SPR. I may do a more detailed review of that paper because SPR has a lot of advantages for fragment screening. You can also use the ability to measure kinetics to identify pathological binders.

Daniel Maturana said...

Hi Peter,

I know this post is a little bit old, but I found it surfing the web (one of the beautiful things about the internet). I just want to mention a technology that I met when I was doing my thesis and where I'm working now: it is called MicroScale Thermophoresis (MST). It allows you to measure dissociation constants very fast with low sample consumption, but the best thing is that you can see directly whether the target protein is aggregated. I'll leave a link here so you can check one of our application notes, made with Alexey Rak at SANOFI, on fragment screening comparing SPR, DSF and MST.