Sunday 31 December 2023

Chemical con artists foil drug discovery

One piece of general advice that I offer to fellow scientists is not to let the fact that an article has been published in Nature (or any other ‘elite’ journal for that matter) cause you to switch off your critical thinking skills while reading it, and the BW2014 article (Chemistry: Chemical con artists foil drug discovery) that I’ll be reviewing in this post is an excellent case in point. My main criticism of BW2014 is that the rhetoric is not supported by data and I’ve always seen the article as something of a propaganda piece.

One observation that I’ll make before starting my review of BW2014 is that what lawyers would call ‘standard of proof’ varies according to whether you’re saying something good about a compound or something bad. For example, I would expect a competent peer reviewer to insist on measured IC50 values if I had described compounds as inhibitors of an enzyme in a manuscript. However, it appears to be acceptable, even in top journals, to describe compounds as PAINS without having to provide any experimental evidence that they actually exhibit some type of nuisance behavior (let alone pan-assay interference). I see a tendency in the ‘compound quality’ field for opinions to be stated as facts and reading some of the relevant literature leaves me with the impression that some in the field have lost the ability to distinguish what they know from what they believe. 

BW2014 has been heavily cited in the drug discovery literature (it was cited as the first reference in the ACS assay interference editorial which I reviewed in K2017) despite providing little in the way of practical advice for dealing with nuisance behavior. BW2014 appears to exert a particularly strong influence on the Chemical Probes Community, having been cited by the A2015, BW2017, AW2022 and A2022 articles as well as in the Toxicophores and PAINS Alerts section of the Chemical Probes Portal. Given the commitment of the Chemical Probes Community to open science, their enthusiasm for the PAINS substructure model introduced in BH2010 (New Substructure Filters for Removal of Pan Assay Interference Compounds (PAINS) from Screening Libraries and for Their Exclusion in Bioassays) is somewhat perplexing since neither the assay data nor the associated chemical structures were disclosed. My advice to the Chemical Probes Community is to let go of PAINS filters.

Before discussing BW2014, I’ll say a bit about high-throughput screening (HTS), which emerged three decades ago as a lead discovery paradigm. From the early days of HTS it was clear, at least to those who were analyzing the output from the screens, that not every hit smelt of roses. Here’s what I wrote in K2017:

Although poor physicochemical properties were partially blamed (3) for the unattractive nature and promiscuous behavior of many HTS hits, it was also recognized that some of the problems were likely to be due to the presence of particular substructures in the molecular structures of offending compounds. In particular, medicinal chemists working up HTS results became wary of compounds whose molecular structures suggested reactivity, instability, accessible redox chemistry or strong absorption in the visible spectrum as well as solutions that were brightly colored. While it has always been relatively easy to opine that a molecular structure ‘looks ugly’, it is much more difficult to demonstrate that a compound is actually behaving badly in an assay.

It has long been recognized that it is prudent to treat frequent-hitters (compounds that hit in multiple assays) with caution when analysing HTS output. In K2017 I discussed two general types of behavior that can cause compounds to hit in multiple assays: Type 1 (assay result gives an incorrect indication of the extent to which the compound affects target function) and Type 2 (compound acts on target by undesirable mechanism of action (MoA)). Type 1 behavior is typically the result of interference with the assay read-out and the hits in question can be accurately described as ‘false positives’ because the effects on the target are not real. Type 1 behavior should be regarded as a problem with the assay (rather than with the compound) and, provided that the activity of a compound has been established using a read-out for which interference is not a problem, interference with other read-outs is irrelevant. In contrast, Type 2 behavior should be regarded as a problem with the compound (rather than with the assay) and an undesirable MoA should always be a show-stopper.

Interference with read-out and undesirable MoAs can both cause compounds to hit in multiple assays. However, these two types of bad behavior can still cause big problems whether or not the compounds are observed to be frequent-hitters. Interference with read-out and undesirable MoAs are very different problems in drug discovery and the failure to recognize this point is a serious deficiency that is shared by BW2014 and BH2010.

Although I’ve criticized the use of PAINS filters, there is no suggestion that compounds matching PAINS substructures are necessarily benign (many of the PAINS substructures look distinctly unwholesome to me). I have no problem whatsoever with people expressing opinions as to the suitability of compounds for screening provided that the opinions are not presented as facts. In my view the chemical con-artistry of PAINS filters lies not in benign compounds having been denounced but in the implication that the filters are based on relevant experimental data.

Given that the PAINS filters form the basis of a cheminformatic model that is touted for prediction of pan-assay interference, one could be forgiven for thinking that the model had been trained using experimental observations of pan-assay interference. This is not so, however, and the data that form the basis of the PAINS filter model actually consist of the output of six assays that each use the AlphaScreen read-out. As noted in K2017, a panel of six assays using the same read-out would appear to be a suboptimal design of an experiment to observe pan-assay interference. Putting this in perspective, P2006 (An Empirical Process for the Design of High-Throughput Screening Deck Filters), which was based on analysis of the output from 362 assays, had actually been published four years before BH2010.
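As an aside, here’s a minimal sketch (my own example, not taken from BW2014 or BH2010) of what applying PAINS filters actually involves in practice, using the RDKit translation of the BH2010 substructures. The structure is arbitrary and the point to note is that the output is simply a list of matched substructures:

```python
# Minimal sketch (my example, not from BW2014 or BH2010) of applying PAINS
# substructure filters with RDKit. A match indicates the presence of a flagged
# substructure and nothing more.
from rdkit import Chem
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

params = FilterCatalogParams()
params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)  # PAINS_A, _B and _C filters
catalog = FilterCatalog(params)

mol = Chem.MolFromSmiles("O=C1CSC(=S)N1Cc1ccccc1")  # arbitrary structure (an N-benzyl rhodanine)

for entry in catalog.GetMatches(mol):
    print(entry.GetDescription())  # names of any matched PAINS filters
```

Nothing in this procedure involves an assay measurement, which is really the crux of my criticism.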

After a bit of a preamble, I need to get back to reviewing BW2014 and my view is that readers of the article who didn’t know better could easily conclude that drug discovery scientists were completely unaware of the problems associated with misleading HTS assay results before the re-branding of frequent-hitters as PAINS in BH2010. Given that M2003 had been published over a decade previously, I was rather surprised that BW2014 had not cited a single article about how colloidal aggregation can foil drug discovery. Furthermore, it had been known (see FS2006) for years before the publication of BH2010 that the importance of colloidal aggregation could be assessed by running assays in the presence of detergent.

I'll be commenting directly on the text of BW2014 for the remainder of the post (my comments are italicized in red).

Most PAINS function as reactive chemicals rather than discriminating drugs. [It is unclear here whether “PAINS” refers to compounds that have been shown by experiment to exhibit pan-assay interference or simply compounds that share structural features with compounds (chemical structures not disclosed) claimed to be frequent-hitters in the BH2010 assay panel. In any case, sweeping generalizations like this do need to be backed with evidence. I do not consider it valid to present observations of frequent-hitter behavior as evidence that compounds are functioning as reactive chemicals in assays.] They give false readouts in a variety of ways. Some are fluorescent or strongly coloured. In certain assays, they give a positive signal even when no protein is present. [The BW2014 authors appear to be confusing physical phenomena such as fluorescence with chemical reactivity.]

Some of the compounds that should ring the most warning bells are toxoflavin and polyhydroxylated natural phytochemicals such as curcumin, EGCG (epigallocatechin gallate), genistein and resveratrol. These, their analogues and similar natural products persist in being followed up as drug leads and used as ‘positive’ controls even though their promiscuous actions are well-documented (8,9). [Toxoflavin is not mentioned in either Ref8 or Ref9 although T2004 would have been a relevant reference for this compound. Ref8 only discusses curcumin and I do not consider that the article documents the promiscuous actions of this compound. Proper documentation of the promiscuity of a compound would require details of the targets that were hit, the targets that were not hit and the concentration(s) at which the compound was assayed. The effects of curcumin, EGCG (epigallocatechin gallate), genistein and resveratrol on four membrane proteins were reported in Ref9 and these effects would raise doubts about any activity observed in a cell-based assay for these compounds (or their close structural analogs). However, I don’t consider that it would be valid to use the results given in Ref9 to cast doubt on biological activity measured in an assay that was not cell-based.] 

Rhodanines exemplify the extent of the problem. [Rhodanines are specifically discussed in K2017 in which I suggest that the most plausible explanation for the frequent-hitter behavior observed for rhodanines in the BH2010 panel of six AlphaScreen assays is that the singly-connected sulfur reacts with singlet oxygen (this reactivity has been reported for compounds with thiocarbonyl groups in their molecular structures).] A literature search reveals 2,132 rhodanines reported as having biological activity in 410 papers, from some 290 organizations of which only 24 are commercial companies. [Consider what the literature search would have revealed if the target substructure had been ‘benzene ring’ rather than ‘rhodanine’. As discussed in this post, the B2023 study presented the diversity of targets hit by compounds incorporating a fused tetrahydroquinoline in their molecular structures as ‘evidence’ for pan-assay interference by compounds based on this scaffold.] The academic publications generally paint rhodanines as promising for therapeutic development. In a rare example of good practice, one of these publications (10) (by the drug company Bristol-Myers Squibb) warns researchers that these types of compound undergo light-induced reactions that irreversibly modify proteins. [The C2001 study (Photochemically enhanced binding of small molecules to the tumor necrosis factor receptor-1 inhibits the binding of TNF-α) is actually a more relevant reference since it focuses on the nature of the photochemically enhanced binding. The structure of the complex of TNFRc1 with one of the compounds studied (IV703; see graphic below) showed a covalent bond between one of the carbon atoms of the pendant nitrophenyl and the backbone amide nitrogen of A62. The structure of the IV703–TNFRc1 complex shows that a covalent bond between the pendant aromatic ring and the protein must also be considered a distinct possibility for the rhodanines reported in Ref10 and C2001.] It is hard to imagine how such a mechanism could be optimized to produce a drug or tool. Yet this paper is almost never cited by publications that assume that rhodanines are behaving in a drug-like manner. [It would be prudent to cite M2012 (Privileged Scaffolds or Promiscuous Binders: A Comparative Study on Rhodanines and Related Heterocycles in Medicinal Chemistry) if denouncing fellow drug discovery scientists for failure to cite Ref10.]

In a move partially implemented to help editors and manuscript reviewers to rid the literature of PAINS (among other things), the Journal of Medicinal Chemistry encourages the inclusion of computer-readable molecular structures in the supporting information of submitted manuscripts, easing the use of automated filters to identify compounds’ liabilities. [I would be extremely surprised if ridding the literature of PAINS was considered by the JMC Editors when they decided to implement a requirement that authors include computer-readable molecular structures in the supporting information of submitted manuscripts. In any case, claims such as this do need to be supported by evidence.]  We encourage other journals to do the same. We also suggest that authors who have reported PAINS as potential tool compounds follow up their original reports with studies confirming the subversive action of these molecules. [I’ve always found this statement bizarre since the BW2014 authors appear to be suggesting that authors who have reported PAINS as potential tool compounds should confirm something that they have not observed and which may not even have occurred. When using the term “PAINS” do the BW2014 authors mean compounds that have actually been shown to exhibit pan-assay interference or compounds that share structural features with compounds that were claimed to exhibit frequent-hitter behavior in the BH2010 assay panel? Would interference with the AlphaScreen read-out by a singlet oxygen quencher be regarded as a subversive action by a molecule in situations where a read-out other than AlphaScreen had been used?] Labelling these compounds clearly should decrease futile attempts to optimize them and discourage chemical vendors from selling them to biologists as valid tools. [The real problem here is compounds being sold as tools in the absence of the measured data that is needed to support the use of the compounds for this purpose. Matches with PAINS substructures would not rule out the use of a compound as a tool if the appropriate package of measured data is available. In contrast, a compound that does not match any PAINS substructures cannot be regarded as an acceptable tool if the appropriate package of measured data is not available. Put more bluntly, you’re hardly going to be able to generate the package of measured data if the compound is as bad as PAINS filter advocates say it is.]

Box: PAINS-proof drug discovery

Check the literature. [It’s always a good idea to check the literature but the failure of the BW2014 authors to cite a single colloidal aggregation article such as M2003 suggests that perhaps they should be following this advice rather than giving it. My view is that the literature on scavenging and quenching of singlet oxygen was treated in a cursory manner in BH2010 (see earlier comment in connection with rhodanines).]  Search by both chemical similarity and substructure to see if a hit interacts with unrelated proteins or has been implicated in non-drug-like mechanisms. [Chemical similarity and substructure searches will identify analogs of hits and it is actually an exact-match structural search that you need to do in order to see if a particular compound is a hit in assays against unrelated proteins (see the sketch after this box).] Online services such as SciFinder, Reaxys, BadApple or PubChem can assist in the check for compounds (or classes of compound) that are notorious for interfering with assays. [I generally recommend ChEMBL as a source of bioactivity data.]  

Assess assays. For each hit, conduct at least one assay that detects activity with a different readout. [This will only detect problems associated with interference with read-out. As discussed in S2009 it may be possible to assess and even correct for interference with read-out without having to run an assay with a different read-out.]  Be wary of compounds that do not show activity in both assays. If possible, assess binding directly, with a technique such as surface plasmon resonance. [SPR can also provide information about MoA since association, dissociation and stoichiometry can all be observed directly using this detection technology.] 
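To make the exact-match point from the box above a little more concrete, here’s a minimal sketch using RDKit (resveratrol is used purely as an illustrative query). A substructure (or similarity) query will retrieve analogs, whereas the canonical SMILES or InChIKey of the hit itself is what you’d use for an exact-match lookup in ChEMBL or PubChem:

```python
# Minimal sketch (my example): a substructure search retrieves analogs, whereas an
# exact-match lookup (e.g. by canonical SMILES or InChIKey) tells you whether this
# particular compound has been reported as a hit against unrelated targets.
from rdkit import Chem

hit = Chem.MolFromSmiles("Oc1ccc(/C=C/c2cc(O)cc(O)c2)cc1")  # resveratrol (illustrative)
phenol = Chem.MolFromSmarts("c1ccccc1[OH]")                 # substructure query

print("substructure match:", hit.HasSubstructMatch(phenol))  # True for countless analogs
print("canonical SMILES  :", Chem.MolToSmiles(hit))          # use for exact-match lookup
print("InChIKey          :", Chem.MolToInchiKey(hit))        # use for exact-match lookup
```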

That concludes blogging for 2023 and many thanks to anybody who has read any of the posts this year. For too many people Planet Earth is not a very nice place to be right now and my new year wish is for a kinder, happier and more peaceful world in 2024. 

Tuesday 19 December 2023

On quality criteria for covalent and degrader probes

I’ll be taking a look at H2023 (Expanding Chemical Probe Space: Quality Criteria for Covalent and Degrader Probes) in this post and the article has also been discussed In The Pipeline. I’ll primarily be discussing the quality criteria for covalent probes although I’ll also comment briefly on the chemical matter criteria proposed for degrader probes. The post is intended as a contribution to the important scientific discussion that the H2023 Perspective is intended to jumpstart:

We are convinced that now is the time to initiate similar efforts to achieve a consensus about quality criteria for covalently acting and degrader probes. This Perspective is intended to jumpstart this important scientific discussion.

Covalent bond formation between ligands and targets is a drug design tactic for exploiting molecular recognition elements in targets that are difficult to make beneficial contacts with. Cysteine SH has minimal capacity to form hydrogen bonds with polar ligand atoms and the exposed nature of catalytic cysteine SH reduces its potential to make beneficial contacts with non-polar ligand atoms. One common misconception in drug discovery is that covalent bond formation between targets and ligands is necessarily irreversible and it wasn’t clear from my reading of H2023 whether the authors were aware that it can also be reversible. In any case, it needed to be made clear that the quality criteria proposed by the authors for covalently acting small-molecule probes only apply to probes that act irreversibly.

Irreversible covalent bond formation is typically used to target non-catalytic residues and design is a lot more complicated than for reversible covalent bond formation. First, IC50 values are time-dependent (there are two activity parameters: affinity and inactivation rate constant), which makes it much more difficult to assess selectivity or to elucidate SAR. Second, the transition state structural models required for modelling inactivation cannot be determined experimentally and therefore need to be calculated using computationally intensive quantum mechanical methods.
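To illustrate the first of these points, here’s a minimal sketch (parameter values are made up) based on the standard two-step scheme E + I ⇌ E·I → E–I, for which the observed inactivation rate constant is k.obs = k.inact × [I]/(K.i + [I]). Under the simplifying assumptions of excess inhibitor and no substrate competition, the apparent IC50 keeps falling as the preincubation time increases:

```python
# Minimal sketch (not from H2023): why IC50 is time-dependent for an irreversible
# inhibitor. Standard two-step scheme E + I <=> E.I (K.i) --k.inact--> E-I, with
# excess inhibitor and no substrate competition, so the fraction of active enzyme
# remaining after preincubation time t is exp(-k.obs*t) where
# k.obs = k.inact*[I]/(K.i + [I]). Parameter values are hypothetical.
import numpy as np
from scipy.optimize import brentq

K_I = 1.0e-6      # M (hypothetical)
K_INACT = 5.0e-3  # 1/s (hypothetical)

def frac_active(conc, t):
    """Fraction of unmodified enzyme after preincubation for t seconds."""
    k_obs = K_INACT * conc / (K_I + conc)
    return np.exp(-k_obs * t)

def apparent_ic50(t):
    """Inhibitor concentration that leaves 50% of the enzyme unmodified at time t."""
    return brentq(lambda c: frac_active(c, t) - 0.5, 1e-12, 1e-1)

for t in (300, 900, 3600):  # 5 min, 15 min and 1 h preincubations
    print(f"t = {t:5d} s   apparent IC50 = {apparent_ic50(t):.2e} M")
```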

I’ll start my review with a couple of general comments. Intracellular concentration is a factor that is not always fully appreciated in chemical biology and I generally recommend that people writing about chemical probes demonstrate awareness of SR2019 (Intracellular and Intraorgan Concentrations of Small Molecule Drugs: Theory, Uncertainties in Infectious Diseases and Oncology, and Promise). On a more pedantic note, I cautioned against using ‘molecule’ as a synonym for ‘compound’ in my review of S2023 (Systematic literature review reveals suboptimal use of chemical probes in cell-based biomedical research) and I suggest that “covalent molecule” might be something that you don't want to see in the text of an article in a chemistry journal.

However, significant efforts need to be invested into characterizing and validating covalent molecules as a prerequisite for conclusive use in biomedical research and target validation studies.

The proposed quality criteria for covalently acting small-molecule probes are given in Figure 2 of H2023 although I’ll be commenting on the text of the article. Subscripting doesn't work well in Blogger and so I'll use K.i and k.inact throughout the post to denote, respectively, the inhibition constant and the first-order inactivation rate constant.  

I’ll start with Section 2.1 (Criteria for Assessing Potency of Covalent Probes) and my comments are italicised in red. 

When working with irreversible covalent probes, it is important to consider that target inhibition is time-dependent and therefore IC50 values, while frequently used, are a suboptimal descriptor of potency. (21) Best practice is to use k.inact (the rate of inactivation) over K.i (the affinity for the target) values instead. (22) [I recommend that values of both k.inact and K.i be reported because this enables the extent of non-covalent target engagement by the chemical probe to be assessed. Regardless of whether binding to target is covalent or non-covalent, the concentration and affinity of substrates (as well as cofactors such as ATP) need to be properly accounted for when interpreting effects of chemical probes in cell-based assays. This is a significant issue for ATP-competitive kinase inhibitors (as discussed in my review of S2023) and I recommend this tweetorial from Keith Hornberger.]

As measurement of k.inact/K.i values can be labor-intensive (or in certain cases technically impossible), IC50 values (or target engagement TE50 values) are often reported for covalent leads and used to generate structure–activity relationships (SARs). [The labor-intensive nature of the measurements is not a valid justification for a failure to measure k.inact and K.i values for a covalent chemical probe.]  Carefully designed biochemical assays used in determining IC50 values can be well-suited as surrogates for k.inact/K.i measurements. (24) [It is my understanding that the primary reason for doing this is to increase the throughput of irreversible inhibition assays for SAR optimization and I would generally be extremely wary of any IC50 value measured for an irreversible inhibitor unless it had been technically impossible to measure k.inact and K.i values for the inhibitor.]
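For completeness, here’s a minimal sketch (the numbers are made up and are not data from H2023 or its ref 24) of how k.inact and K.i values are typically extracted by fitting observed pseudo-first-order inactivation rate constants, measured at several inhibitor concentrations, to k.obs = k.inact × [I]/(K.i + [I]):

```python
# Minimal sketch (made-up numbers, not data from H2023 or its ref 24): extracting
# k.inact and K.i by fitting observed pseudo-first-order inactivation rate constants
# (k.obs), measured at several inhibitor concentrations, to the hyperbolic
# relationship k.obs = k.inact*[I]/(K.i + [I]).
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0]) * 1e-6   # M (hypothetical)
k_obs = np.array([0.4, 1.1, 2.4, 3.8, 4.6]) * 1e-3   # 1/s (hypothetical)

def hyperbola(i, k_inact, K_i):
    return k_inact * i / (K_i + i)

(k_inact, K_i), _ = curve_fit(hyperbola, conc, k_obs, p0=(5e-3, 1e-6))
print(f"k.inact = {k_inact:.2e} 1/s   K.i = {K_i:.2e} M   "
      f"k.inact/K.i = {k_inact / K_i:.2e} 1/(M*s)")
```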

2.2. Criteria for Assessing Covalent Probe Selectivity

We propose a selectivity factor of 30-fold in favor of the intended target of the probe compared to that of other family members or identified off-targets under comparable assay conditions. [The authors need to be clearer as to which measure of ‘activity’ they propose should be used for calculating the ratio and some justification for the ratio (why 30-fold rather than 50-fold or 25-fold?) should be given. Regardless of whether binding to target is covalent or non-covalent, the concentration and affinity of substrates (as well as cofactors such as ATP) need to be properly accounted for when assessing selectivity. It is not clear how the selectivity factor should be defined to quantify selectivity of an inhibitor that binds covalently to the target but non-covalently to off-targets. My comments on the THZ1 probe in my review of the S2023 study may be relevant.]
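To make the point about the choice of ‘activity’ measure a bit more concrete, here’s a toy calculation (all values are hypothetical) showing that a selectivity factor computed from k.inact/K.i ratios need not agree with one computed from apparent IC50 values measured at a fixed preincubation time:

```python
# Toy calculation (all values hypothetical, not from H2023): the selectivity factor
# computed from k.inact/K.i ratios need not agree with one computed from apparent
# IC50 values measured at a fixed preincubation time, because the apparent IC50
# depends on how close each protein is to kinetic saturation.
import numpy as np

def k_obs(conc, k_inact, K_i):
    return k_inact * conc / (K_i + conc)

def apparent_ic50(k_inact, K_i, t, grid=np.logspace(-10, -3, 20000)):
    # concentration leaving 50% of the protein unmodified after preincubation time t
    remaining = np.exp(-k_obs(grid, k_inact, K_i) * t)
    return grid[np.argmin(np.abs(remaining - 0.5))]

target = dict(k_inact=5e-3, K_i=1e-7)  # hypothetical intended target
off = dict(k_inact=3e-3, K_i=1e-5)     # hypothetical off-target

sel_kinact_over_Ki = (target["k_inact"] / target["K_i"]) / (off["k_inact"] / off["K_i"])
sel_ic50 = apparent_ic50(**off, t=300) / apparent_ic50(**target, t=300)
print(f"selectivity from k.inact/K.i : {sel_kinact_over_Ki:.0f}-fold")
print(f"selectivity from 5 min IC50s : {sel_ic50:.0f}-fold")
```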

2.3. Chemical Matter Criteria for Covalent Probes

Ideally, the on-target activity of the covalent probe is not dominated by the reactive warhead, but the rest of the molecule provides a measurable reversible affinity for the intended target. [My view is that the reversible affinity of the probe should be greater than simply what is measurable and I suggest, with some liberal arm-waving, that a K.i cutoff of  ~100 nM might be more useful (a K.i value of 10 μM is usually measurable provided that the inhibitor is adequately soluble in assay buffer).] Seeing SARs over 1–2 log units of activity resulting from core, substitution, and warhead changes is an important quality criterion for covalent probe molecules. [The authors need to be clearer about which ‘activity’ they are referring to (differences in K.i and k.inact values between compounds are likely to be greater than the corresponding differences in k.inact/K.i values). The criterion “SAR for covalent and non-covalent interactions” shown in Figure 2 is nonsensical.]

3.3. Chemical Matter Criteria for Degrader Probes

When selecting chemical degrader probes, it is recommended that a chemist critically assesses the chemical structure of the degrader for the presence of chemical groups that impart polypharmacology or interfere with assay read-outs (PAINs motifs). (78) [I certainly agree that chemists should critically assess chemical structures of probes and, if performing a critical assessment of this nature for a degrader probe, I would be taking a look in ChEMBL to see what’s known for structurally-related compounds. I consider the risk of discarding acceptable chemical matter on the basis of matches with PAINS substructures to be low although there’s a lot more to critical assessment of chemical structures than simply checking for matches against PAINS substructures. My view is that genuine promiscuity (as opposed to frequent hitter behavior resulting from interference with read-out) cannot generally be linked to chemical groups. As noted in K2017 the PAINS substructure model introduced in BH2010 was actually trained on the output of six AlphaScreen assays and the applicability domain of the model should be regarded as prediction of frequent-hitter behavior in this assay panel rather than interference with assay read-outs (that said the most plausible explanation for frequent-hitter behavior in the PAINS assay panel is interference with the AlphaScreen read-out by compounds that quench or react with singlet oxygen). My recommendation is that chemical matter criteria for chemical probes should be specified entirely in terms of measured data and the models used to select/screen potentially acceptable chemical matter should not be included in the chemical matter criteria.] 

This is a good point to wrap up my contribution to the important scientific discussion that H2023 is intended to jumpstart. While some of what I've written might be seen as nitpicking please bear in mind that quality criteria for chemical probes need to be defined precisely in order to be useful to the chemical biology and medicinal chemistry communities.

Wednesday 6 December 2023

Are fused tetrahydroquinolines interfering with your assay?

I’ll be taking a look at B2023 (Fused Tetrahydroquinolines Are Interfering with Your Assay) in this post. The article has already been discussed in posts at Practical Fragments and In The Pipeline. In anticipation of the stock straw man counterarguments to my criticisms of PAINS filters, I must stress that there is absolutely no suggestion that compounds matching PAINS filters are necessarily benign. The authors have shown that fusion of cyclopentene at C3-C4 of the tetrahydroquinoline (THQ) ring system is associated with a risk of chemical instability and I consider this to be extremely useful information for anybody thinking about using this scaffold. However, the authors do also appear to be making a number of claims that are not supported by evidence and, in my view, have not demonstrated that the chemical instability leads to pan-assay interference or even frequent-hitter behavior.   

The term ‘PAINS’ crops up frequently in B2023 (the authors even refer to “the PAINS concept” although I think that’s pushing things a bit) and I’ll start by saying something about two general types of nuisance behavior of compounds in assays and these points are discussed in more detail in K2017 (Comment on The Ecstasy and Agony of Assay Interference Compounds). From the perspective of screening libraries of compounds for biological activity, the two types of nuisance behavior are very different problems that need to be considered very differently. One criticism that can be made of both BH2010 (original PAINS study) and BW2014 (Chemical con artists foil drug discovery) is that neither study considers the differing implications for drug discovery of these two types of nuisance behavior.

The first type of nuisance behavior in assays is interference with assay read-out and, when ‘activity’ in an assay is due to assay interference, hits can accurately be described as ‘false positives’ (this should be seen as a problem with the assay rather than the compound). Interference with assay read-outs is certainly irksome when you’re analysing output from screens because you don’t know if the ‘activity’ is real or not. However, if you’re able to demonstrate genuine activity for a compound using an assay with a read-out for which interference is not an issue then interference with other assay read-outs is irrelevant and would not rule out the compound as a viable starting point for further investigation. Interference with assay read-outs generally increases with the concentration of the compound in the assay (this is why biophysical methods are often favored for screening fragments) and I’ll direct readers to a helpful article by former colleagues. It’s also worth noting that interference with read-out can lead to false negatives. 

The second type of nuisance behavior is that the compound acts on a target by an undesirable mechanism of action (MoA) and it is not accurate to describe hits behaving in this manner as ‘false positives’ because the effect on the target is real (this should be seen as a problem with the compound rather than the assay). In contrast to interference with read-out, an undesirable MoA is a show-stopper. An undesirable MoA with which many drug discovery scientists will be familiar is colloidal aggregate formation (see M2003) and the problem can be assessed by running the assay in the absence and presence of detergent (see FS2006). In some cases patterns in screening output may point to an undesirable MoA. For example, cysteine reactivity might be indicated by compounds hitting in multiple assays for inhibition of enzymes that feature cysteine in their catalytic mechanisms.

I’ll make some comments on PAINS filters before I discuss B2023 in detail and much of what I’ll be saying has already been said in K2017 and C2017 (Phantom PAINS: Problems with the Utility of Alerts for Pan-Assay INterference CompoundS) although you shouldn’t need to consult these articles in order to read the blog post unless you want to get some more detail. The PAINS filter model introduced in BH2010 consists of a number of substructures which are claimed (I say “claimed” because the assay results and associated chemical structures are proprietary) to be associated with frequent-hitter behavior in a panel of six assays that all use the AlphaScreen read-out (compounds that react with or quench singlet oxygen have the potential to interfere with this read-out). I argued in K2017 that six assays, all using the same read-out, do not constitute a credible basis for the design of an experiment to detect pan-assay interference. Put another way, the narrow scope of the data used to train the PAINS filter model restricts the applicability domain of this model to prediction of frequent-hitter behavior in these six assays. The BH2010 study does not appear to present a single example of a compound that has actually been demonstrated by experiment to exhibit pan-assay interference.

The B2023 study reports that tetrahydroquinolines (THQs) fused at C3-C4 with cyclopentene (1) are unstable. This is valuable information for anybody who may have the misfortune to be working with this particular scaffold and the observed instability implies that drug discovery scientists should also be extremely wary of any biological activity reported for compounds that incorporate this scaffold. Furthermore, the authors show that the instability can be linked to the presence of the carbon-carbon double bond in the ‘third ring’ since 2, the dihydro analog of 1, appears to be stable. I would certainly mention the chemical instability reported in B2023 if reviewing a manuscript that reported biological activity for compounds based on this scaffold. However, I would not mention that BH2010 has stated that the scaffold matches the anil_alk_ene (SLN: C[1]:C:C:C[4]:C(:C:@1)NCC[9]C@4C=CC@9 ) PAINS substructure because the nuisance behavior consists of hitting frequently in a six-assay panel of questionable relevance and the PAINS filters were based on analysis of proprietary data.

Although I wouldn’t have predicted the chemical instability reported for 1 by B2023, this scaffold is certainly not a structural feature that I would have taken into lead optimization with any enthusiasm (a hydrogen that is simultaneously benzylic and allylic does rather look like a free lunch for the CYPs). I would still be concerned about instability even if methylene groups were added to or deleted from the aliphatic parts of 1. I suspect that the electron-releasing nitrogen of 1 contributes to chemical instability although I don’t think that changing nitrogen for another atom type would eliminate the risk of chemical instability. Put another way, the instability observed for 1 should raise questions about the stability of a number of structurally-related scaffolds. Chemical instability is (or at least should be) a show-stopper in the context of drug discovery even if it doesn't lead to interference with assay read-out, an undesirable MoA or pan-assay interference.

I certainly consider the instability observed for 1 to be of interest and relevant to a number of structurally-related chemotypes. However, I have a number of concerns about B2023 and one specific criticism is that the authors use “tricyclic/fused THQ” throughout the text as a synonym for “tricyclic/fused THQ with a carbon-carbon double bond in the ‘third’ ring”. At best this is confusing and it could lead to groundless criticism, either publicly or in peer review, of a study that reported assay results for compounds based on the scaffold in 2. A more general point is that the authors make a number of claims that, in my view, are not adequately supported by evidence. I’ll start with the significance section and my comments are italicized in red:

Tricyclic tetrahydroquinolines (THQs) are a family of lesser studied pan-assay interference compounds (PAINS) [The authors need to provide specific examples of tricyclic THQs that have actually been shown to exhibit pan-assay interference to support this claim.] These compounds are found ubiquitously throughout commercial and academic small molecule screening libraries. [The authors do not appear to have presented evidence to support this claim and the presence of compounds in vendor catalogues does not prove that the compounds are actually being screened. In my view, the authors appear to be trying to ‘talk up’ the significance of their findings by making this statement.] Accordingly, they have been identified as hits in high-throughput screening campaigns for diverse protein targets. We demonstrate that fused THQs are reactive when stored in solution under standard laboratory conditions and caution investigators from investing additional resource into validating these nuisance compounds.

Continuing with the introduction

Fused tetrahydroquinolines (THQs) are frequent hitters in hit discovery campaigns. [In my view the authors have not presented sufficient evidence to support this statement and I don’t consider claims made in BH2010 for frequent-hitter behavior by compounds matching the anil_alk_ene PAINS substructure to be admissible as evidence simply because they are based on proprietary data. In any case the numbers of compounds matching the anil_alk_ene PAINS substructure and reported in BH2010 to hit in zero (17) or one (11) assays in the PAINS assay panel suggest that 28 compounds (of a total of 51 substructural matches) cannot be regarded as frequent-hitters in this assay panel.]  Pan-assay interference compounds (PAINS) have been controversial in the recent literature. While some literature supports these as nuisance compounds, other papers describe PAINS as potentially valuable leads. (1 | 2 | 3 | 4) [The C2017 study referenced as 2 is actually a critique of PAINS filters and I’m assuming that the authors aren’t suggesting that it “supports these [PAINS] as nuisance compounds”. However, I would consider it a gross misrepresentation of C2017 to imply that the study describes “PAINS as potentially valuable leads”.] There have been descriptions of many different classes of PAINS that vary in their frequency of occurrence as hits in the screening literature. [In my view, the number of articles on PAINS appears to greatly exceed the number of compounds that have actually been shown to exhibit pan-assay interference.]

The number of papers that selected this scaffold during hit discovery campaigns from multiple chemical libraries supports the idea that fused THQs are frequent hitters. [Let’s take a closer look at what the authors are suggesting by considering a selection of compounds, each of which has a benzene ring in its molecular structure. Now let’s suppose that each of a large number of targets is hit by at least one of the compounds in this selection (I could easily satisfy this requirement by selecting marketed drugs with benzene rings in their molecular structures). Applying the same logic as the authors, I could use these observations to support the idea that compounds incorporating benzene rings in their molecular structures are frequent-hitters. In my view the B2023 study doesn’t appear to have presented a single example of a fused THQ that has actually been shown experimentally to exhibit frequent-hitter behavior. As mentioned earlier in this post less than half of the compounds matching the anil_alk_ene PAINS substructure that were evaluated in the BH2010 assay panel can be regarded as frequent-hitters.] At first glance, these compounds appear to be valid, optimizable hits, with reasonable physicochemical properties. Although micromolar and reproducible activity has been reported for multiple THQ analogues on many protein targets, hit-to-lead optimization programs aimed at improving the initial hits (Supporting Information (SI), Table S1) have resulted in no improvement in potency or no discernible structure–activity relationships (SAR) [Achieving increased potency and establishing SARs are certainly important objectives in hit-to-lead studies. However, assertions that hit-to-lead optimizations “have resulted in no improvement in potency or no discernible structure–activity relationships” do need to be supported with appropriate discussion of specific hit-to-lead optimization studies.]  

Examples of Fused THQs as “Hits” Are Pervasive

The diversity of protein targets captured below supports the premise that the fused THQ scaffold does not yield specific hits for these proteins but that the reported activity is a result of pan-assay interference. [I could use an argument analogous to the one that I’ve just used for frequent-hitters to ‘prove’ that compounds with benzene rings in their molecular structure do not yield specific hits and that any reported activity is due to pan-assay interference. The authors do not appear to have presented a single example of a fused THQ that has been shown by experiment to exhibit pan-assay interference.]

Concluding remarks

Our review and evidence-based experiments solidify the idea that tricyclic THQs are nuisance compounds that cause pan-assay interference in the majority of screens rather than privileged structures worthy of chemical optimization. [While I certainly agree that chemical instability would constitute a nuisance, I would consider it wildly extravagant to claim that tricyclic THQs can “cause pan assay interference” since nobody appears to have actually observed pan-assay interference for even a single tricyclic THQ.] Their widespread micromolar activities on a broad range of proteins with diverse assay readouts support our assertion that they are unlikely to be valid hits. [As stated previously, I do not consider that “widespread micromolar activities on a broad range of proteins” observed for compounds that share a particular structural feature implies that all compounds with the particular structural feature are unlikely to be valid hits.]

So that concludes my review of the B2023 study. I really liked the experimental work that revealed the instability of 1 and linked it to the presence of the double bond in the 'third' ring.  Furthermore, these experimental results would (at least for me) raise questions about the chemical stability of some scaffolds that are structurally-related to 1. However, I found the analysis of the bioactivity data reported in the literature for fused THQs to be unconvincing to the extent that it significantly weakened the B2023 study.