Origin of the "5σ" threshold


33

The news says that CERN will announce tomorrow that the Higgs boson has been experimentally detected with 5σ evidence. According to that article:

5σ equates to a 99.99994% chance that the data the CMS and ATLAS detectors are seeing are not just random noise, and a 0.00006% chance that they have been fooled; 5σ is the certainty required for something to be officially labeled a scientific "discovery".

That is not super rigorous, but it seems to say that physicists use the standard statistical methodology of "hypothesis testing", setting α to 0.0000006, which corresponds to z = 5 (two-tailed)? Or is there some other meaning?

In much of science, of course, setting alpha to 0.05 is done routinely. That would be equivalent to "two-σ" evidence, although I have never heard it put that way. Are there other fields (besides particle physics) where a much stricter definition of alpha is standard? Does anyone know a reference on how the five-σ rule came to be accepted in particle physics?
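A quick numerical check of this correspondence, as a minimal sketch in Python using scipy.stats (it simply assumes a standard normal reference distribution):

    # Convert z thresholds into normal tail probabilities.
    from scipy.stats import norm

    for z in (2, 5):
        one_tailed = norm.sf(z)      # P(Z > z)
        two_tailed = 2 * norm.sf(z)  # P(|Z| > z)
        print(f"z = {z}: one-tailed p = {one_tailed:.3g}, two-tailed p = {two_tailed:.3g}")

    # Prints (approximately):
    # z = 2: one-tailed p = 0.0228, two-tailed p = 0.0455
    # z = 5: one-tailed p = 2.87e-07, two-tailed p = 5.73e-07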

Update: I am asking this question for a simple reason. My book Intuitive Biostatistics (like most statistics books) has a section that explains how arbitrary the usual "P < 0.05" rule is. I would like to add this example of a scientific field in which a much (much!) smaller value of α is considered necessary. But if the example is really more complicated, involving Bayesian methods (as some comments below suggest), it would not be a good fit or would require much more explanation.


2
Ever heard of "Six Sigma"?
Daniel R Hicks

In quality control, six sigma is treated as Daniel suggests with his question/remark. All these rejection probabilities assume sampling from a normal distribution, and the tail probabilities can be larger for other distributions. Using extremes like 5 or 6 sigma can only be useful in special circumstances. In practice, sample size and variability in the data make inference beyond 2 or 3 sigma infeasible.
Michael Jackson Chernick

1
Basically, most particle physicists feel more comfortable with Bayesian ideas when estimating parameters, so they have "X% certainty, given the data and the priors, that the Higgs signal is not zero", which is certainly different from saying there is only a "0.01% chance that the signal is random noise" (there are also non-random fluctuations arising from systematics!). [1]: physics.stackexchange.com/questions/8752/…
Néstor

3
@Néstor: I'm watching the live broadcast of the Higgs press conference now, and no one is mentioning Bayesian interpretations. "p-values" and "significance level" are used, but only a horribly misinformed Bayesian would interpret those as probabilities that the signal is random noise. I think that the text quoted in the OP's question is simply a misinterpretation of what a p-value really is.
MånsT

1
BTW, I wrote a blog post about this issue: randomastronomy.wordpress.com.
Néstor

Answers:


13

In most applications of statistics there is that old chestnut about 'all models are wrong, some are useful'. This being the case, we would only expect a model to perform at a given level, since we are describing some incredibly complicated process using some simple model.

Physics is very different, so intuition developed from statistical models isn't so appropriate. In physics, and in particular particle physics, which deals directly with fundamental physical laws, the model really is supposed to be an exact description of reality. Any departure from what the model predicts must be completely explained by experimental noise, not by a limitation of the model. This means that if the model is good and correct and the experimental apparatus is well understood, the statistical significance should be very high, hence the high bar that is set.

The other reason is historical: the particle physics community has been burned in the past by 'discoveries' at lower significance levels that were later retracted, hence they are generally more cautious now.


1
Do you agree that physics uses standard statistical hypothesis testing with a very low alpha (in this case, anyway)? Or do they use some kind of Bayesian approach, as Nestor said in a comment above?
Harvey Motulsky

2
My understanding from talking to some of the people I know who work on ATLAS is that the analysis is all very Bayesian. However, they are lower-level guys (i.e. the ones who actually do the work). It wouldn't surprise me if some of the talking heads higher up the chain had a poorer grasp of the interpretation. That being said, the presentation of the LHC results was pretty poor and didn't really come across as very Bayesian, as others have noted.
Bogdanovist

2
I've always thought that particle physics in particular also dealt with billions of events, so you have to set the bar very high.
Wayne

11

History and origin

According to Robert D Cousins1 and Tommaso Dorigo2, the origin of the 5σ threshold lies in the early particle physics work of the 1960s, when numerous histograms from scattering experiments were investigated and searched for peaks/bumps that might indicate some newly discovered particle. The threshold is a rough rule to account for the multiple comparisons that are being made.

Both authors refer to a 1968 article by Rosenfeld3, which dealt with the question of whether or not there are far-out mesons and baryons, for which several 4σ effects were measured. The article answered the question negatively by arguing that the number of published claims corresponds to the statistically expected number of fluctuations. Along with several calculations supporting this argument, the article promoted the use of the 5σ level:

Rosenfeld: "Before we go on to survey far-out mass spectra where bumps have been reported in (Kππ)3/2,(πρ) we should first decide what threshold of significance to demand in 1968. I want to show you that although experimentalists should probably note 3σ-effects, theoreticians and phenomenologists would do better to wait till the effect reaches >4σ."

and later in the paper (emphasis is mine)

Rosenfeld: "Then to repeat my warning at the beginning of this section; we are generating at least 100 000 potential bumps per year, and should expect several 4σ and hundreds of 3σ fluctuations. What are the implications? To the theoretician or phenomenologist the moral is simple; wait for 5σ effects."

Tommaso seems to be careful in stating that it started with the Rosenfeld article:

Tommaso: "However, we should note that the article was written in 1968, but the strict criterion of five standard deviations for discovery claims was not adopted in the seventies and eighties. For instance, no such thing as a five-sigma criterion was used for the discovery of the W and Z bosons, which earned Rubbia and Van der Meer the Nobel Prize in physics in 1984."

But by the 1980s the use of 5σ had spread. For instance, the astronomer Steve Schneider4 mentions in 1989 that it is something being taught (emphasis mine in the quote below):

Schneider: "Frequently, 'levels of confidence' of 95% or 99% are quoted for apparently discrepant data, but this amounts to only two or three statistical sigmas. I was taught not to believe anything less than five sigma, which if you think about it is an absurdly stringent requirement --- something like a 99.9999% confidence level. But of course, such a limit is used because the actual size of sigma is almost never known. There are just too many free variables in astronomy that we can't control or don't know about."

Yet, in the field of particle physics, many publications were still based on 4σ discrepancies up till the late 90s. This only changed to 5σ at the beginning of the 21st century. It was probably prescribed as a guideline for publications around 2003 (see the prologue in Franklin's book Shifting Standards5):

Franklin: By 2003 the 5-standard-deviation criterion for "observation of" seems to have been in effect

...

A member of the BaBar collaboration recalls that about this time the 5-sigma criterion was issued as a guideline by the editors of the Physical Review Letters


Modern use

Currently, the 5σ threshold is a textbook standard. For instance, it appears in a standard article on physics.org6 and in some of Glen Cowan's work, such as the statistics section of the Review of Particle Physics from the Particle Data Group7 (albeit with several critical side notes):

Glen Cowan: Often in HEP, the level of significance where an effect is said to qualify as a discovery is Z=5, i.e., a 5σ effect, corresponding to a p-value of 2.87×10⁻⁷. One's actual degree of belief that a new process is present, however, will depend in general on other factors as well, such as the plausibility of the new signal hypothesis and the degree to which it can describe the data, one's confidence in the model that led to the observed p-value, and possible corrections for multiple observations out of which one focuses on the smallest p-value obtained (the "look-elsewhere effect").
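The conversion Cowan describes is one-sided. A small sketch of the reverse direction, from a p-value to a significance Z (again with scipy.stats, assuming a standard normal):

    # Convert a one-sided p-value into a significance Z.
    from scipy.stats import norm

    for p in (0.05, 0.01, 2.87e-7):
        print(f"p = {p:g}  ->  Z = {norm.isf(p):.2f}")

    # Prints (approximately):
    # p = 0.05  ->  Z = 1.64
    # p = 0.01  ->  Z = 2.33
    # p = 2.87e-07  ->  Z = 5.00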

The use of the 5σ level is now ascribed to four reasons:

  • History: in practice, 5σ was found to be a good threshold. (Exotic effects seem to happen randomly even between 3σ and 4σ, like the recent 750 GeV diphoton excess.)

  • The look-elsewhere effect (or multiple comparisons). Either because multiple hypotheses are tested, or because experiments are performed many times, people adjust for this (very roughly) by raising the bound to 5σ. This relates to the history argument; see the rough sketch after this list.

  • Systematic effects and uncertainty in σ. Often the uncertainty of the experimental outcome is not well known. The σ is derived, but the derivation relies on weak assumptions, such as the absence of systematic effects or the possibility of ignoring them. Increasing the threshold seems to be a way to protect against such effects. (This is a bit strange, though: the computed σ has no relation to the size of the systematic effects, so the logic breaks down; an example is the "discovery" of superluminal neutrinos, which was reported as having 6σ significance.)

  • Extraordinary claims require extraordinary evidence. Scientific results are reported in a frequentist way, for instance using confidence intervals or p-values, but they are often interpreted in a Bayesian way. The 5σ level is claimed to account for this.
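As a rough illustration of the look-elsewhere point, and of Rosenfeld's counting argument quoted above, here is a minimal sketch assuming 100 000 independent searches per year with Gaussian tails:

    # Expected number of >= z sigma flukes among N independent searches,
    # assuming Gaussian (one-sided) tails.
    from scipy.stats import norm

    N = 100_000  # Rosenfeld's "potential bumps per year"
    for z in (3, 4, 5):
        p = norm.sf(z)                   # chance of a single search fluctuating >= z sigma
        expected = N * p                 # expected number of such flukes per year
        at_least_one = 1 - (1 - p) ** N  # chance of at least one such fluke
        print(f"{z} sigma: ~{expected:.1f} flukes expected, "
              f"P(at least one) = {at_least_one:.3f}")

    # Prints (approximately):
    # 3 sigma: ~135.0 flukes expected, P(at least one) = 1.000
    # 4 sigma: ~3.2 flukes expected, P(at least one) = 0.958
    # 5 sigma: ~0.0 flukes expected, P(at least one) = 0.028

This matches Rosenfeld's "several 4σ and hundreds of 3σ fluctuations", while a 5σ fluke remains unlikely even after a year of searching.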

Several criticisms of the 5σ threshold have been written by Louis Lyons8,9, and the earlier mentioned articles by Robert D Cousins1 and Tommaso Dorigo2 also provide critique.


Other Fields

It is interesting to note that many other scientific fields do not have similar thresholds or do not, somehow, deal with the issue. I imagine this makes some sense in the case of experiments with humans, where it is very costly (or impossible) to extend an experiment that gave a 0.05 or 0.01 significance.

The result of not taking these effects into account is that over half of the published results may be wrong, or at least not reproducible. (This has been argued for the case of psychology by Monya Baker10, and I believe many others have made similar arguments. I personally think that the situation may be even worse in nutritional science.) And now people from fields other than physics are thinking about how they should deal with this issue (e.g. the case of medicine/pharmacology11).


  1. Cousins, R. D. (2017). The Jeffreys–Lindley paradox and discovery criteria in high energy physics. Synthese, 194(2), 395-432. arxiv link

  2. Dorigo, T. (2013) Demystifying The Five-Sigma Criterion, from science20.com 2019-03-07

  3. Rosenfeld, A. H. (1968). Are there any far-out mesons or baryons? web-source: escholarship

  4. Burbidge, G., Roberts, M., Schneider, S., Sharp, N., & Tifft, W. (1990, November). Panel discussion: Redshift related problems. In NASA Conference Publication (Vol. 3098, p. 462). link to photocopy on harvard.edu

  5. Franklin, A. (2013). Shifting standards: Experiments in particle physics in the twentieth century. University of Pittsburgh Press.

  6. What does the 5 sigma mean? from physics.org 2019-03-07

  7. Beringer, J., Arguin, J. F., Barnett, R. M., Copic, K., Dahl, O., Groom, D. E., ... & Yao, W. M. (2012). Review of particle physics. Physical Review D-Particles, Fields, Gravitation and Cosmology, 86(1), 010001. (section 36.2.2. Significance tests, page 394, link aps.org )

  8. Lyons, L. (2013). Discovering the Significance of 5 sigma. arXiv preprint arXiv:1310.1284. arxiv link

  9. Lyons, L. (2014). Statistical Issues in Searches for New Physics. arXiv preprint arxiv link

  10. Baker, M. (2015). Over half of psychology studies fail reproducibility test. Nature News. from nature.com 2019-03-07

  11. Horton, R. (2015). Offline: what is medicine's 5 sigma?. The Lancet, 385(9976), 1380. from thelancet.com 2019-03-07


4

For a reason entirely different from that of physics, there are other fields with much stricter alphas when they engage in hypothesis testing. Genetic epidemiology is among them, especially when using "GWAS" (Genome-Wide Association Studies) to look at various genetic markers for disease.

Because a GWAS is a massive exercise in multiple hypothesis testing, the state-of-the-art analysis techniques are all built around much stricter alphas than 0.05. Other such "candidate screening" study techniques that follow in the wake of the genomics studies will likely do the same.
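As an illustration of the kind of correction involved, here is a minimal Bonferroni sketch; the figure of one million markers is an assumed round number, chosen because it reproduces the commonly cited genome-wide significance threshold of 5×10⁻⁸:

    # Bonferroni correction for a GWAS-style screen.
    # m = 1,000,000 markers is an assumed round figure.
    from scipy.stats import norm

    m = 1_000_000
    alpha_family = 0.05                      # desired family-wise error rate
    alpha_per_test = alpha_family / m        # Bonferroni-corrected per-test alpha
    fwer_uncorrected = 1 - (1 - 0.05) ** m   # FWER if every test used alpha = 0.05

    print(f"per-test alpha = {alpha_per_test:g}")                      # 5e-08
    print(f"equivalent one-sided Z = {norm.isf(alpha_per_test):.2f}")  # ~5.33
    print(f"FWER without correction = {fwer_uncorrected:.3f}")         # 1.000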


2
These are only tiny local αs. GWAS still have an overall type I error of 5% for claiming a success that isn't there in reality.
Horst Grünbusch

Licensed under cc by-sa 3.0 with attribution required.