This post was prompted by a survey of screening library size by Teddy at Practical Fragments. I commented briefly there but it soon became clear that a really long comment would be needed to say everything. While at AstraZeneca, I was involved in putting together a 20k screening library for fragment-based work and I thought that some people might be interested in reading about the experience. The project started at the beginning of 2005 although the library (GFSL2005) was not assembled until the following year. Some aspects of the library design were also described in our contribution to the JCAMD special issue on FBDD. The NMR screening libraries with which I've been involved have been a lot smaller (~2k) than this, so why would one want to assemble a fragment screening library of this size? One reason was the diversity of fragment-based activities, which included a high concentration component when running HTS.

In screening, you soon learn the importance of logistics and, in particular, compound management. If you're going to assemble a screening library that can be used easily, this usually means getting the compounds into dimethyl sulfoxide (DMSO) so that samples can be dispensed automatically. Fragments will usually be screened at high concentration, which means that stock solutions also need to be more concentrated (we used 100 mM) because most assays don't particularly like DMSO. Another reason for maintaining stock solutions is that there are minimum volume limits for dissolving solids in DMSO, so starting from solids each time you run a screen is wasteful of material as well as time. The essence of the GFSL2005 project was ensuring that there were enough high concentration stock solutions in the liquid store to support diverse fragment-based activities on different Discovery sites.
While the entire library might be screened as a component of HTS, individual groups could screen subsets of GFSL2005 that were significantly smaller (at least when I left AZ in 2009). It's important to remember that while GFSL2005 was a generic library, we were also trying to anticipate the needs of people who might be doing directed fragment screening. The compounds were selected using Core and Layer (which I've discussed before). This approach is not specific to fragment screening libraries and I've used it to put together a compound library for phenotypic screening. The in-house software, some of which had been developed in response to the emergence of HTS, was actually in place by the beginning of 1996. Here (from one of my presentations) is a slide that illustrates the approach.
One of the comments on the Practical Fragments post was about molecular diversity and the anonymous commentator asked:
"Are some companies more keen because of their bigger libraries or just lazy and adding more and more compounds without looking at diversity?"
This is a good challenge and I'll try to give you an idea of what we were trying to do. Core and Layer is all about diversity in that the approach aims to drive compound selection away (in terms of molecular similarity) from what has already been selected. However, when designing the library we did try to ensure that as many compounds as possible had a near neighbour (within the library). Here's a slide showing (in pie chart format) the fractions of the library corresponding to each number of neighbours. There are actually three pie charts because counting neighbours depends on the similarity threshold that you use, and I've done the analysis for three different thresholds.
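The neighbour counting behind those pie charts can be sketched in a few lines. This is just a toy illustration (not the software we used): it assumes fingerprints stored as sets of on-bits, Tanimoto similarity, and made-up data and function names.

```python
from itertools import combinations

def tanimoto(a, b):
    """Tanimoto similarity between fingerprints held as sets of on-bits."""
    shared = len(a & b)
    return shared / (len(a) + len(b) - shared)

def neighbour_counts(fps, threshold):
    """For each compound, count other library members at or above the threshold."""
    counts = [0] * len(fps)
    for i, j in combinations(range(len(fps)), 2):
        if tanimoto(fps[i], fps[j]) >= threshold:
            counts[i] += 1
            counts[j] += 1
    return counts

# Toy fingerprints; a real library would use hashed structural fingerprints
fps = [{1, 2, 3, 4}, {1, 2, 3, 5}, {7, 8, 9}]
for threshold in (0.5, 0.7, 0.9):
    counts = neighbour_counts(fps, threshold)
    print(threshold, counts, "isolated:", counts.count(0) / len(counts))
```

Raising the threshold shrinks the neighbour counts and grows the isolated fraction, which is why the analysis was run at three thresholds rather than one.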
You can do a similar analysis to ask about neighbours of library compounds that are available. This is of interest because you'll generally want to be able to follow up hits with analogues in a process that some call 'SAR-by-Catalogue', although I regard the term as silly and refuse to use it. Availability is not constant and the analysis shown in the following slide was a snapshot generated in 2008. You'll notice that there is an extra row of pie charts since one can define availability more (>20 mg) or less (>10 mg) conservatively. If you require plenty of sample for availability and require neighbours to be very similar then you'll have fewer neighbours.
There was also some commentary on solubility and ideally this should be measured in assay buffer for library compounds. We used our in-house high throughput solubility assay when putting GFSL2005 together. This solubility assay had originally been designed for assessing hits from conventional HTS and had a limited dynamic range (1-100 µM) which was not ideal for fragments. The assay used standard (i.e. 10 mM) stock solutions which meant that we could only do measurements for samples that were in this format (library compounds acquired from external sources were only made up as 100 mM stocks). Nevertheless, we used the assay since we believed that the information would still be valuable. The graphic below illustrates the relationship between solubility and lipophilicity (ClogP) for compounds that are neutral under assay conditions. We used ClogP to bin the data, which allowed us to make use of out-of-range data by plotting percentiles. This allowed us to assess the risk of poor solubility as a function of ClogP. It's worth pointing out that the binning was done so as to be able to include in-range and out-of-range data in a single analysis. We just showed the lowest percentiles because we were only interested in the least soluble compounds. Even so, you should be able to get an idea of the variation in solubility within each bin.
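To show how binning lets out-of-range measurements contribute, here's a minimal sketch. It is not the in-house assay pipeline; the 1-100 µM limits match the dynamic range mentioned above, but the function names and toy data are hypothetical. The key point is that a censored value can still be ranked, so a low percentile is exact whenever it lands on an in-range measurement.

```python
import math

def low_percentile(sols, pct, floor=1.0, ceiling=100.0):
    """Percentile by rank. Out-of-range values can still be ranked, so a low
    percentile is exact when it lands on an in-range measurement; if it lands
    in the censored region we can only report the assay limit."""
    ranked = sorted(sols)
    k = max(0, math.ceil(pct / 100 * len(ranked)) - 1)
    value = ranked[k]
    if value <= floor:
        return f"<{floor}"
    if value >= ceiling:
        return f">{ceiling}"
    return value

def bin_by_clogp(records, edges):
    """Group (ClogP, solubility) pairs into bins defined by ClogP edges."""
    bins = [[] for _ in range(len(edges) - 1)]
    for clogp, sol in records:
        for i in range(len(edges) - 1):
            if edges[i] <= clogp < edges[i + 1]:
                bins[i].append(sol)
                break
    return bins

# Toy data: (ClogP, solubility in µM); a value of 1.0 means "below the floor"
records = [(0.5, 80.0), (0.8, 1.0), (1.5, 30.0), (1.7, 5.0), (2.5, 1.0), (2.9, 2.0)]
for sols in bin_by_clogp(records, [0, 1, 2, 3]):
    print(low_percentile(sols, 25))
```

This is why plotting a low percentile per ClogP bin works even when the least soluble compounds only register as "<1 µM".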
Staying with the solubility theme, I'll finish off with a look at aromatic rings and carbon saturation as determinants of aqueous solubility. The two articles at which I'll be looking were both featured in a post at Practical Fragments and, given that the analyses in the two articles were not exactly rose-scented, the title of the post struck me as somewhat ironic.
I'll start with Escape from Flatland, which introduces carbon bond saturation, quantified as fraction sp3 (Fsp3, the fraction of carbon atoms that are sp3-hybridized), as a molecular descriptor. Figure 5 is the one most relevant to the discussion and this is captioned, "Fsp3 as a function of logS..." although the caption is not totally accurate. LogS is binned and it is actually average Fsp3 that is plotted and not Fsp3 itself. I'm guessing that the average is the mean (rather than the median) Fsp3 for each logS bin. If you look at the plot there appears to be a good correlation between the mid-point of each logS bin and average Fsp3, although it would have been helpful if they'd presented a correlation coefficient and shown a standard deviation for each bin. In fact, it would have been helpful if they'd shown some standard deviations for the other figures as well, but that is peripheral to this discussion. The problem with Figure 5 is that the intra-bin variation in Fsp3 is hidden (i.e. no error bars) and without being able to see this variation it is very difficult to know how strongly Fsp3 (as opposed to its average value) is correlated with logS. Those readers who were awake (it was immediately after lunch) at my RACI talk last December will know what I'm talking about, but hopefully some of the rest of you will at least be wondering why the authors didn't simply fit logS to Fsp3. Anyone care to speculate as to what the correlation coefficient between logS and Fsp3 might be?
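To illustrate why correlating bin averages can overstate things, here's a toy simulation (nothing to do with the authors' actual data): the per-compound relationship between an Fsp3-like descriptor and logS is deliberately made weak, yet the correlation computed Figure-5-style from bin averages comes out close to 1.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
# Simulated compounds: logS depends only weakly on Fsp3, with lots of scatter
fsp3 = [random.random() for _ in range(20000)]
logs = [f + random.gauss(0, 1) for f in fsp3]
r_raw = pearson(fsp3, logs)

# Figure-5-style analysis: bin by logS, then correlate the bin mid-points
# with the mean Fsp3 of each bin; averaging hides the intra-bin scatter
edges = [-3, -2, -1, 0, 1, 2, 3]
mids, means = [], []
for lo, hi in zip(edges, edges[1:]):
    grp = [f for f, s in zip(fsp3, logs) if lo <= s < hi]
    if grp:
        mids.append((lo + hi) / 2)
        means.append(sum(grp) / len(grp))
r_binned = pearson(mids, means)
print(r_raw, r_binned)  # the binned correlation is far closer to 1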
Impact of aromatic ring count presents a box plot of solubility as a function of number of aromatic rings. At least the variation in solubility is shown for each value of aromatic ring count is shown (even though I'd have preferred to see log of solubility plotted). The authors also looked at the correlation between number of aromatic rings cLogP (which I prefer to call ClogP) and judged it to be excellent (although it is not clear whether they bothered to calculate a correlation coefficient to support their assertion of excellence). Correlations between descriptors are important because the effect of one can be largely due to the extent to which is correlated with another. Although there are ways that you can model the dependence of a quantity on two descriptors that are correlated with each other, the authors chose to do this graphically using the pie chart array in Figure 6. If you look at the pie chart array, you can sort of convince yourself that aromatic ring count has an effect on solubility that is not just due to its effect on ClogP. However, there is established statistical methodology for dealing with this sort or problem. I couldn't help wondering why the authors didn't use this to analyse their data. What puzzled me even more was that they didn't seem to have considered the possiblity that the number of aromatic rings might be correlated with molecular weight (Dan also picked up on this in his post) since I'd guess that this correlation might be even stronger than the one with ClogP.
I do believe that a strong case can be made for looking beyond aromatic rings when putting screening libraries and I'll point you to a recent intiative that may be of interest. However, this case is based on considerations of molecular recognition and molecular diversity. I don't believe that either of these studies (which both imply that substituting cyclohexadiene for benzene would be a good thing) strengthens that case. Possibly if the analysis had been done differently I might have arrived at a different conclusion but a lack of error bars and a failure to acccount for the effect of molecular weight leave me with too many doubts.
Literature cited
Blomberg et al, Design of compound libraries for fragment screening. JCAMD 2009, 23 513-525 DOI
Colclough et al, High throughput solubility determination with application to selection of compounds for fragment screening. Bioorg. Med. Chem. 2008, 16, 6611-6616 DOI
Lovering, Bikker & Humblet, Escape from Flatland: Increasing Saturation as an Approach to Improving Clinical Success. J. Med. Chem. 2009, 52, 6752-6756 DOI
Ritchie & MacDonald, The impact of aromatic ring count on compound developability: Are too many aromatic rings a liability in drug design? Drug Discov. Today 2009, 14, 1011-1020 DOI