I've been meaning to take a look at the Seven Year Itch (SYI) article on PAINS for some time. SYI looks back over the preceding seven years of PAINS while presenting a view of future directions. One general comment that I would make of SYI is that it appears to try to counter criticisms of PAINS filters without explicitly acknowledging those criticisms.
This will be a long post and strong coffee may be required. Before starting, it must be stressed that I neither deny that assay interference is a significant problem nor do I assert that compounds identified by PAINS filters are benign. The essence of my criticism of much of the PAINS analysis is that the rhetoric is simply not supported by the data. It has always been easy to opine that chemical structures look unwholesome but it has always been rather more difficult to demonstrate that compounds are behaving pathologically in assays. One observation that I would make about modern drug discovery is that fact and opinion often become entangled to the extent that those who express (and seek to influence) opinions are no longer capable of distinguishing what they know from what they believe.
I've included some photos to break up the text a bit and these are from a 2016 visit to the north of Vietnam. I'll start with this one taken from the western shore of Hoan Kiem Lake the night after the supermoon.
I found SYI to be something of a propaganda piece with all the coherence of a six-hour Fidel Castro harangue. As is typical for articles in the PAINS literature, SYI is heavy in speculation and opinion but considerably lighter in facts and measured data. It wastes little time in letting readers know how many times the original PAINS article was cited. One criticism that I have made about the original PAINS article (that also applies to SYI and the articles in between) is that it neither defines the term PAINS (other than to expand the acronym) nor provides objective criteria by which a compound can be shown experimentally to be (or not to be) a PAINS (or is that a PAIN). An 'unofficial' definition for the term PAINS has actually been published and I think that it's pretty good:
"PAINS, or pan-assay interference compounds, are compounds that have been observed to show activity in multiple types of assays by interfering with the assay readout rather than through specific compound/target interactions."
While PAINS purists might denounce the creators of the unofficial PAINS definition for heresy and unspecified doctrinal errors, I would argue that the unofficial definition is more useful than the official definition (PAINS are pan-assay interference compounds). I would also point out that some of those who introduced the unofficial definition actually use experiments to study assay interference when much of the official PAINSology (or should that be PAINSomics) consists of speculation about the causes of frequent-hitter behavior. One question that I shall put to you, the reader, is how often, when reading an article on PAINS, do you see real examples of experimental studies that have clearly demonstrated that specific compounds exhibit pan-assay interference?
Although the reception of PAINS filters has generally been positive, JCIM has published two articles (the first by an Associate Editor of that journal and the second by me) that examine the PAINS filters critically from a cheminformatic perspective. The basis of the criticism is that the PAINS filters are predictors of frequent-hitter behavior for assays using an AlphaScreen readout and that they have been developed using proprietary data. It's quite a leap from frequent-hitter behavior when tested at single concentrations in a panel of six AlphaScreen assays to pan-assay interference. In the language of cheminformatics, we can state that the PAINS filters have been extrapolated out of a narrow applicability domain and they have been reported (ref and ref) to be less predictive of frequent-hitter behavior in these situations. One point that I specifically made was that a panel of six assays all using the same readout is a suboptimal design of an experiment to detect and quantify pan-assay interference.
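To make the cheminformatic point concrete: the published PAINS substructure definitions are widely available, for example as a filter catalog in RDKit. A minimal sketch of screening a structure against them (the rhodanine SMILES is my own illustrative example, not a compound from any of the articles discussed):

```python
from rdkit import Chem
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

# Build a catalog from the PAINS A, B and C substructure filters
# distributed with RDKit.
params = FilterCatalogParams()
params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)
catalog = FilterCatalog(params)

# An illustrative query: a benzylidene rhodanine, one of the
# best-known PAINS classes (my own example structure).
mol = Chem.MolFromSmiles("O=C1C(=Cc2ccccc2)SC(=S)N1")
match = catalog.GetFirstMatch(mol)
print(match.GetDescription() if match is not None else "no PAINS alert")
```

Bear in mind what a match here actually tells you: only that the structure contains a substructure associated with frequent-hitter behavior in the original AlphaScreen panel, which is not, by itself, evidence of pan-assay interference.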
In my article, bad behavior in assays was classified as Type 1 (assay result gives an incorrect indication of the extent to which the compound affects the function of the target) or Type 2 (compounds affect target function by an undesirable mechanism of action). I used these rather bland labels because I didn't want to become ensnared in a Dien Bien Phu of nomenclature and it must be stressed that there is absolutely no suggestion that other people use these labels. My own preference would actually be to only use the term interference for Type 1 bad behavior and it's worth remembering that Type 1 bad behavior can also lead to false negatives.
The distinction between Type 1 and Type 2 behaviors is an important and useful one to make from the perspective of drug discovery scientists who are making decisions as to which screening hits to take forward. Type 1 behavior is undesirable because it means that you can't believe the screening result for hits but, provided that you can find an assay (e.g. label-free measurement of affinity) that is not interfered with, Type 1 behavior is a manageable, although irksome, problem. Running a second assay that uses an orthogonal readout may shed light on whether Type 1 behavior is an issue although, in some cases, it may be possible to assess, and even correct for, interference without running the orthogonal second assay. Type 2 behavior is a much more serious problem and a compound that exhibits Type 2 behavior needs to be put out of its misery as swiftly and mercifully as possible. The challenge presented by Type 2 behavior is that you need to establish the mechanism of action simply to determine whether or not it is desirable. Running a second assay with an orthogonal readout is unlikely to provide useful information since the effect on target function is real.
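The triage logic of the preceding paragraph can be distilled into a toy sketch (the labels and follow-up actions paraphrase this post; none of this comes from SYI or any published workflow):

```python
from enum import Enum

class BadBehavior(Enum):
    # Labels as used in this post, not an industry-standard ontology.
    TYPE_1 = "assay result misreports the compound's effect on target function"
    TYPE_2 = "compound affects target function by an undesirable mechanism"

def triage(behavior: BadBehavior) -> str:
    """Suggested follow-up for a suspect screening hit (illustrative only)."""
    if behavior is BadBehavior.TYPE_1:
        # Manageable: the readout, not the target modulation, is suspect,
        # so re-test with an orthogonal (e.g. label-free) readout.
        return "re-assay with an orthogonal readout"
    # Type 2: the effect on the target is real but unexploitable, so an
    # orthogonal readout will simply confirm it; establish the mechanism.
    return "establish mechanism of action; discard if undesirable"

print(triage(BadBehavior.TYPE_1))
```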
Barbed wire at Strongpoint Béatrice. I'm guessing that it was not far from here that, on the night of 13th/14th March, 1954, Captain Riès would have made the final transmission: "It's all over - the Viets are here. Fire on my position. Out."
Most (all?) of the PAINSology before SYI failed to make any distinction between Type 1 and Type 2 bad behavior. SYI states "There does not seem to be an industry-accepted nomenclature or ontology of anomalous binding behavior" and makes some suggestions as to how this state of affairs might be rectified. SYI recommends that "Actives" be first classified as "target modulators" or "readout modulators". The "target modulators" are all considered to be "true positives" and these are further classified as "true hits" or "false hits". All the "readout modulators" are labelled as "false positives". Unsurprisingly, the authors recommend that all the "false hits" and "false positives" be labelled as pan-assay interference compounds regardless of whether the compounds in question actually exhibit pan-assay interference. In general, I would advise against drawing a distinction between the terms "hit" and "positive" in the context of screening but, if you choose to do so, then you really do need to define the terms much more precisely than the authors have done.
I think the term "readout modulator" is reasonable and is equivalent to my definition of Type 1 behavior (assay result gives an incorrect indication of the extent to which the compound affects the function of the target). However, I strongly disagree with the classification of compounds showing "non-specific interaction with target leading to active readout" as readout modulators since I'd regard any interaction with the target that affects its function to be modulation. My understanding is that the effects of colloidal aggregators on protein function are real (although not exploitable) and that it is often possible to observe reproducible concentration responses. My advice to the authors is that, if you're going to appropriate colloidal aggregators as PAINS, then you might at least put them in the right category.
While the term "target modulator" is also reasonable, it might not be such a great idea to use it in connection with assay interference since it's also quite a good description of a drug. Consider the possibility of homeopaths and anti-vaxxers denouncing the pharmaceutical industry for poisoning people with target modulators. However, I disagree with the use of the term "false hit" since the modulation of the target is real even when the mechanism of action is not exploitable. There is also a danger of confusing the "false hits" with the "false positives" and SYI is not exactly clear about the distinction between a "hit" and a "positive". In screening both terms tend to be used to specify results for which the readout exceeds a threshold value.
The defensive positions on one of the hills of Strongpoint Béatrice have not been restored. Although the trenches have filled in with time, they are not always as shallow as they appear to be in this photo (as I discovered when I stepped off the path).
It's now time to examine what SYI has to say and singlet oxygen is as good a place as any to start from. One criticism of PAINS filters that I have made, both in my article and the Molecular Design blog, is that some of the frequent-hitter behavior in the PAINS assay panel may be due to quenching or scavenging of singlet oxygen which is an essential component of the AlphaScreen readout. SYI states:
"However, while many PAINS classes contain some member compounds that registered as hits in all the assays analyzed and that therefore could be AlphaScreen-specific signal interference compounds, most compounds in such classes signal in only a portion of assays. For these, chemical reactivity that is only induced in some assays is a plausible mechanism for platform-independent assay interference."
The authors seem to be interpreting the observation that a compound only hits in a portion of assays as evidence for platform-independent assay interference. This is actually a very naive argument for a number of reasons. First, compounds do not all appear to have been assayed at the same concentration in the original PAINS assay panel and there may be other sources of variation that were not disclosed. Second, different readout thresholds may have been used for the assays in the panel and noise in the readout introduces a probabilistic element to whether or not the signal for a compound exceeds the threshold. Last, but definitely not least, the molecular structure of a compound does influence the efficiency with which it quenches or scavenges singlet oxygen. A recent study observed that PAINS "alerts appear to encode primarily AlphaScreen promiscuous molecules".
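The second point, that noise makes threshold exceedance probabilistic, is easy to demonstrate with a toy simulation (every number below is invented for illustration): give a compound the same true signal, just below the hit threshold, in six assays, add Gaussian readout noise, and it will typically hit in only a portion of the panel.

```python
import random
import statistics

random.seed(42)

THRESHOLD = 50.0    # hit threshold, e.g. % inhibition (assumed)
TRUE_SIGNAL = 48.0  # identical true signal in every assay, just sub-threshold
NOISE_SD = 5.0      # Gaussian readout noise (assumed)
N_ASSAYS = 6        # size of the original PAINS AlphaScreen panel
N_TRIALS = 20000

hit_counts = []
for _ in range(N_TRIALS):
    # Count how many of the six assays the compound 'hits' in this trial.
    hits = sum(
        TRUE_SIGNAL + random.gauss(0.0, NOISE_SD) >= THRESHOLD
        for _ in range(N_ASSAYS)
    )
    hit_counts.append(hits)

mean_hits = statistics.mean(hit_counts)
p_all_six = hit_counts.count(N_ASSAYS) / N_TRIALS
# Despite identical behavior in every assay, the compound hits in only
# a few assays on average and almost never in all six.
print(f"mean hits per panel: {mean_hits:.2f}, P(hit in all 6): {p_all_six:.4f}")
```

In other words, hitting in only a portion of the panel is exactly what one would expect from a compound that interferes identically with every assay, so the observation is not, by itself, evidence for platform-independent interference.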
If you read enough PAINS literature, you'll invariably come across sweeping generalizations made about PAINS. For example, it has been claimed that "Most PAINS function as reactive chemicals rather than discriminating drugs." SYI follows this pattern and asserts:
"Another comment we frequently encounter and very relevant to this journal is that PAINS may not be appropriate for drug development but may still comprise useful tool compounds. This is not so, as tool compounds need to be much more pharmacologically precise in order that the biological responses they invoke can be unambiguously interpreted."
While it is encouraging that the authors have finally realized the significance of the distinction between readout modulators and target modulators, they don't seem to be fully aware of the implications of making this distinction. Specifically, one can no longer make the sweeping generalizations about PAINS that are common in PAINS literature. Consider a hypothetical compound that is an efficient quencher of singlet oxygen and that has shown up as a hit in all six AlphaScreen assays of the original PAINS assay panel. While many would consider this compound to be a PAINS (or PAIN), I would strongly challenge a claim that observation of frequent-hitter behavior in this assay panel would be sufficient to rule out the use of the compound as a tool.
SYI notes that PAINS are recognized by other independently developed promiscuity filters.
"The corroboration of PAINS classes by such independent efforts provides strong support for the structural filters and subsequent recognition and awareness of poorly performing compound classes in the literature. It is instructive therefore to introduce two more recent and fully statistically validated frequent-hitter analytical methods that are assay platform-independent. The first was reported in 2014 by AstraZeneca(16) and the second in 2016 by academic researchers and called Badapple.(27)"
I don't think it is particularly surprising (or significant) that some of the PAINS classes are recognized as frequent-hitters by other models for frequent-hitter behavior. What is not clear is how many of the PAINS classes are recognized by the other frequent-hitter models or how 'strong' the recognition is. I would challenge the description of the AstraZeneca frequent-hitter model as "fully statistically validated" since validation was performed using proprietary data. I made a similar criticism of the original PAINS study and would suggest that the authors take a look at what this JCIM editorial has to say about the use of proprietary data in modeling studies.
The French named this place Eliane and it was quieter when I visited than it would have been on 6th May, 1954 when the Viet Minh detonated a large mine beneath the French positions. It has been said that the alphabetically-ordered (Anne-Marie to Isabelle) strongpoints at Dien Bien Phu were named for the mistresses of the commander, Colonel (later General) Christian de Castries although this is unlikely.
"In summary, we have previously discussed a variety of issues key to interpretation of PAINS filter outputs, ranging from HTS library design and screening concentration, relevance of PAINS-bearing FDA-approved drugs, issues in SMARTS to SLN conversion, the reality of nonfrequent hitter PAINS, as well as PAINS and non-PAINS that are respectively not recognized or recognized in the PAINS filters as originally published. However, nowhere has a discussion around these key principles been summarized in one article, and that is the point of the current article. Had this been the case, we believe some recent contributions to the literature would have been more thoughtfully directed. (21,32)"
I must confess that reference to the reality of nonfrequent hitter pan assay interference compounds would normally prompt me to advise authors to stay off the peyote until the manuscript has been safely submitted. However, the bigger problem embedded in the somewhat Rumsfeldesque first sentence is that you need objective and unambiguous criteria by which compounds can be determined to be PAINS or non-PAINS before you can talk about "key principles". You also need to acknowledge that interference with readout and undesirable mechanisms of action are entirely different problems requiring entirely different solutions.
I noted that recent contributions to the literature from me and from a JCIM Associate Editor (who might know a bit more about cheminformatics than the authors) were criticized for being insufficiently thoughtful. To be criticized in this manner is, as the late, great Denis Healey might have observed, "like being savaged by a dead sheep". Despite what the authors believe, I can confirm that my contribution to the literature would have been very similar even if SYI had been published beforehand. Nevertheless, I would suggest to the authors that dismissing the feedback from a JCIM Associate Editor as if he were a disobedient schoolboy might not have been such a smart move. For example, it could get the JMC editors wondering a bit more about exactly what they'd got themselves into when they decided to endorse a frequent-hitter model as a predictor of pan-assay interference. The endorsement of a predictive model by a premier scientific journal represents a huge benefit to the creators of the model but the flip side is that it also represents a huge risk to the journal.
So that's all that I want to say about PAINS and it's a good point to wrap things up so that I can return to Vietnam for the remainder of the post.
I'm pretty sure that neither General Giap nor General de Castries visited the summit of Fansipan which at 3143 meters is the highest point in Vietnam (I wouldn't have either, had a cable car not been installed a few months before I visited). It's a great place to enjoy the sunset.
Back in Hanoi, I attempted to pay my respects to Uncle Ho, as I've done on two previous visits to this city, but timing was not great (they were doing the annual formaldehyde change). Uncle Ho is in much better shape than Chairman Mao who is actually seven years 'younger' and this is a consequence of having been embalmed by the Russians (the acknowledged experts in this field). Chairman Mao had the misfortune to expire when Sino-Soviet relations were particularly frosty and his pickling was left to some of his less expert fellow citizens. It is also said that the Russian embalming team arrived in Hanoi before Uncle Ho had actually expired...
Catching up with Uncle Ho