
Comment on "Reproducibility and Replication of Experimental Particle Physics Results"

A Letter to the Editor

Published on Apr 30, 2021

Keywords: replication, errors, error control, testing, p-values

I would like to thank Junk and Lyons (2020) for beginning a discussion about replication in high-energy physics (HEP). Junk and Lyons ultimately argue that HEP learned its lessons the hard way through past failures and that other fields could learn from our procedures. They emphasize that experimental collaborations would risk their legacies were they to make a type-1 error in a search for new physics, and outline the vigilance taken to avoid one, such as data blinding and a strict 5σ threshold.

The discussion, however, ignores an elephant in the room: There are regularly anomalies in searches for new physics that result in substantial scientific activity but don’t replicate with more data. For example, in 2015 ATLAS and CMS showed evidence for a new particle with a mass of about 750 GeV that decayed into two photons (CERN, 2015). Whilst the statistical significance was never greater than 5σ (Aaboud et al., 2016; Khachatryan et al., 2016), the results motivated about 500 publications about the new particle, and countless special seminars and talks (Garisto, 2016). The effect did not replicate when the experimental teams analyzed a larger dataset about six months later (Aaboud et al., 2017; Khachatryan et al., 2017). Although this was a particularly egregious example, experimental anomalies that garner considerable interest before vanishing are annual events (Garisto, 2020).

We are motivated to attempt to control the type-1 error rate because type-1 errors damage our credibility and lead to us squandering our time and resources on spurious effects. Whilst these non-replications aren’t strictly type-1 errors, as the statistical significance didn’t reach the 5σ threshold and no discoveries were announced, we incur similar damaging consequences, so they cannot be ignored. I shall refer to these errors—substantial scientific activity, including publicly doubting the null and speculating about new effects, when the null was in fact true—as type-1′ errors. Whilst type-1 errors appear to be under control in HEP, type-1′ errors are rampant. In the following sections, I discuss these errors in the context of statistical practices at the Large Hadron Collider (LHC).

Evidence and Error Rates

Searches for new physics at the LHC are performed by comparing a p-value, p, against a pre-specified threshold, α.
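The thresholds conventionally quoted in HEP are expressed in units of Gaussian standard deviations; a short sketch, using only the Python standard library, makes the correspondence with one-sided tail probabilities explicit:

```python
from math import erfc, sqrt

def sigma_to_pvalue(n_sigma: float) -> float:
    """One-sided tail probability of a standard normal beyond n_sigma."""
    return 0.5 * erfc(n_sigma / sqrt(2.0))

p3 = sigma_to_pvalue(3.0)  # the "evidence" threshold, about 1.35e-3
p5 = sigma_to_pvalue(5.0)  # the "discovery" threshold, about 2.87e-7
```

Note that, taken alone, these numbers are local p-values for a single test; their interpretation depends on how many tests are performed.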

There are two common interpretations of this procedure (Hubbard & Bayarri, 2003):

  1. Error theoretic (Neyman & Pearson, 1933): By rejecting the null if p < α, we ensure a long-run type-1 error rate of α. The threshold α specifies the desired type-1 error rate, and the p-value is a means to achieving it.

  2. Evidential (Fisher, 1925): The p-value is a measure of evidence against the null hypothesis. The threshold α specifies a desired level of evidence.

Even among adherents of p-values, the latter interpretation is considered unwarranted (Lakens, 2021), and it is almost never accompanied by a theoretical framework or justification, or a discussion of the desired and actual properties of p as a measure of evidence.

Unfortunately, Junk and Lyons repeatedly and implicitly switch from one interpretation to the other. Indeed, the authors (2020) interpret p as a measure of evidence and α as a threshold in evidence, e.g., justifying 5σ by “extraordinary claims require extraordinary evidence” and stating that “[3σ] or greater constitutes ‘evidence’.” We know, however, that interpreted as a measure of evidence, p is incoherent (Schervish, 1996; Wagenmakers, 2007) and usually overstates the evidence against the null (Berger & Sellke, 1987; Sellke et al., 2001). For example, there exists a famous bound (Sellke et al., 2001; Vovk, 1993) implying that under mild assumptions p = 0.05 corresponds to about 30% posterior probability of the null. This was in fact the primary criticism in Benjamin et al. (2017). Consequently, one factor in the prevalence of type-1′ errors may be that:

  1. physicists interpret p-values as evidence (as do Junk and Lyons);

  2. based on p-values, physicists overestimate the evidence for new effects;

  3. consequently, substantial scientific activity is expended on what turn out to be spurious effects.
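The calibration cited above (Sellke et al., 2001; Vovk, 1993) is simple enough to evaluate directly. A minimal sketch, assuming equal prior odds on the null (the 50/50 prior is an illustrative assumption, not part of the bound itself):

```python
from math import e, log

def min_null_posterior(p: float, prior_null: float = 0.5) -> float:
    """Sellke-Bayarri-Berger lower bound on the posterior probability of
    the null given a p-value p (valid for p < 1/e), assuming the stated
    prior probability of the null."""
    assert 0.0 < p < 1.0 / e
    # Lower bound on the Bayes factor in favor of the null: -e * p * ln(p)
    bayes_factor_bound = -e * p * log(p)
    prior_odds = prior_null / (1.0 - prior_null)
    posterior_odds = prior_odds * bayes_factor_bound
    return posterior_odds / (1.0 + posterior_odds)

# With equal prior odds, p = 0.05 leaves roughly a 29% posterior
# probability that the null is true:
print(min_null_posterior(0.05))
```

That is, even a formally “significant” p = 0.05 is compatible with the null retaining nearly a one-in-three posterior probability, which is the sense in which p-values overstate the evidence.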

Unfortunately, p-values simply can’t give researchers (including Junk and Lyons) what they want—a measure of evidence—leading to wishful and misleading interpretations of pp as evidence (Cohen, 1994). This cannot be overcome by better statistical training; it is an inherent deficiency of p-values and no amount of education about them will imbue them with a coherent evidential meaning.

Controlling Errors

Controlling error rates depends critically on knowing the data collection and analysis plan—the intentions of the researchers and what statistical tests would be performed under what circumstances—and adjusting the p-value to reflect that. There are, however, an extraordinary number of tests performed by ATLAS, CMS and LHCb at the LHC and elsewhere. This already makes it challenging to interpret a p-value at all and undoubtedly contributes to the prevalence of type-1{}^\prime errors.
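The scale of the problem is easy to illustrate. Under the idealized assumption of independent tests (real searches are correlated, so this is only a sketch of the look-elsewhere effect), the chance that at least one test fluctuates past a local threshold grows rapidly with the number of tests:

```python
def global_false_alarm(local_p: float, n_tests: int) -> float:
    """Probability that at least one of n independent tests of a true
    null yields a p-value below local_p (idealized look-elsewhere effect)."""
    return 1.0 - (1.0 - local_p) ** n_tests

# A local 3-sigma fluctuation (p ~ 1.35e-3) becomes more likely than not
# somewhere among a thousand independent searches:
print(global_false_alarm(1.35e-3, 1000))  # about 0.74
```

This is why a local p-value cannot be interpreted without knowing how many tests were, or could have been, performed, and why collider-wide anomalies at the 3σ level are unremarkable in aggregate.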

Junk and Lyons rightly celebrate the trend in HEP to publicly release datasets and tools for analyzing them. This, however, raises the specter of data dredging. Massive public datasets (CERN, 2020) combined with recent developments in machine learning (Kasieczka et al., 2021) could enable dredging at an unprecedented scale. We must consider what precautions are needed to prevent misleading inferences from being drawn in the future, e.g., pre-registration of planned analyses as a prerequisite for accessing otherwise open data. Other, more radical remedies for the problems here and elsewhere include moving away from an error-theoretic approach, or indeed from any approach based on p-values.

References

Aaboud, M., et al. (2016). Search for resonances in diphoton events at √s = 13 TeV with the ATLAS detector. Journal of High Energy Physics, 9.

Aaboud, M., et al. (2017). Search for new phenomena in high-mass diphoton final states using 37 fb⁻¹ of proton–proton collisions collected at √s = 13 TeV with the ATLAS detector. Physics Letters B, 775, 105–125.

Benjamin, D. J., Berger, J. O., Johannesson, M., Nosek, B. A., Wagenmakers, E., et al. (2017). Redefine statistical significance. Nature Human Behaviour, 2, 6–10.

Berger, J. O., & Sellke, T. (1987). Testing a point null hypothesis: The irreconcilability of p values and evidence. Journal of the American Statistical Association, 82(397), 112–122.

CERN. (2015). ATLAS and CMS physics results from Run 2.

CERN. (2020). CERN announces new open data policy in support of open science.

Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49(12), 997–1003.

Fisher, R. A. (1925). Statistical methods for research workers. Oliver & Boyd.

Garisto, D. (2020). The era of anomalies. Physics, 13, 79.

Garisto, R. (2016). Editorial: Theorists React to the CERN 750 GeV Diphoton Data. Physical Review Letters, 116(15).

Hubbard, R., & Bayarri, M. J. (2003). Confusion over measures of evidence (p’s) versus errors (α’s) in classical statistical testing. The American Statistician, 57(3), 171–178.

Junk, T. R., & Lyons, L. (2020). Reproducibility and Replication of Experimental Particle Physics Results. Harvard Data Science Review, 2(4).

Kasieczka, G., et al. (2021). The LHC Olympics 2020: A Community Challenge for Anomaly Detection in High Energy Physics.

Khachatryan, V., et al. (2016). Search for resonant production of high-mass photon pairs in proton–proton collisions at √s = 8 and 13 TeV. Physical Review Letters, 117(5), 051802.

Khachatryan, V., et al. (2017). Search for high-mass diphoton resonances in proton–proton collisions at 13 TeV and combination with 8 TeV search. Physics Letters B, 767, 147–170.

Lakens, D. (2021). The practical alternative to the p value is the correctly used p value. Perspectives on Psychological Science.

Neyman, J., & Pearson, E. S. (1933). On the problem of the most efficient tests of statistical hypotheses. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 231, 289–337.

Schervish, M. J. (1996). P values: What they are and what they are not. The American Statistician, 50(3), 203–206.

Sellke, T., Bayarri, M. J., & Berger, J. O. (2001). Calibration of p values for testing precise null hypotheses. The American Statistician, 55(1), 62–71.

Vovk, V. G. (1993). A logic of probability, with application to the foundations of statistics. Journal of the Royal Statistical Society: Series B, 55(2), 317–341.

Wagenmakers, E.-J. (2007). A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review, 14, 779–804.

This article is © 2021 by the author(s). The editorial is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article. The article should be attributed to the authors identified above.
