Distributions other than the normal where mean and variance are independent


32

I was wondering whether there are distributions other than the normal where the mean and the variance are independent of each other (or, in other words, where the variance is not a function of the mean).


1
I'm not sure I understand the question correctly. Are you asking whether there are distributions other than the normal that are completely specified by the mean and variance? In one sense the variance is a function of the mean, since it is a measure of the dispersion around the mean, but I suspect that is not what you have in mind.

Do you mean that the sample mean X̄ = (1/n)∑ᵢ₌₁ⁿ Xᵢ and the sample variance (1/n)∑ᵢ₌₁ⁿ(Xᵢ − X̄)² are independent? Good question! Maybe projecting a Gaussian random variable will keep the independence?
Robin girard

4
Srikant is right. If the question is about the sample mean and sample variance, the answer is no. If the question is about the population mean and variance, the answer is yes; David gives good examples below.

1
Just to clarify, what I meant is this. For the normal distribution, the mean μ and the variance σ² fully characterize the distribution, and σ² is not a function of μ. For many other distributions, this is not so. For example, for the binomial distribution, we have the mean nπ and the variance nπ(1 − π), so the variance is a function of the mean. Another example is the gamma distribution with parameters θ (scale) and κ (shape), where the mean is μ = κθ and the variance is κθ², so the variance is actually μθ.
Wolfgang

7
Please consider modifying your question, then, because the response you checked as your preferred answer does not answer the question as it stands (and the other one does). Currently you are using the word "independent" in an idiosyncratic way. Your example with Gamma shows this: one could simply reparameterize Gamma in terms of the mean (mu) and variance (sigma), because we can recover theta = sigma/mu and kappa = mu^2/sigma. In other words, functional "independence" of the parameters is usually meaningless (except for single-parameter families).
whuber
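whuber's reparameterization of the Gamma can be written out explicitly (a short derivation; here σ² denotes the variance, which the comment above writes as sigma):

```latex
% Gamma(kappa, theta): mean and variance in terms of shape/scale
\mu = \kappa\theta, \qquad \sigma^{2} = \kappa\theta^{2}
% Solving for the original parameters:
\quad\Longrightarrow\quad
\theta = \frac{\sigma^{2}}{\mu}, \qquad \kappa = \frac{\mu^{2}}{\sigma^{2}}
```

So any Gamma can be specified by (μ, σ²) just as well as by (κ, θ), which is why functional "independence" of the parameters is only meaningful relative to a chosen parameterization.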

Answers:


11

Note: Please read the answer by @G. Jay Kerns, and see Carlin and Louis (1996) or your favorite probability reference for background on the calculation of the mean and variance as the expected value and second moment of a random variable.

A quick scan of Appendix A in Carlin and Louis (1996) provides the following distributions, which are similar to the normal in this regard, in that the same distribution parameters are not used in the calculations of the mean and variance. As pointed out by @robin, when calculating parameter estimates from a sample, the sample mean is required to calculate sigma.

Multivariate Normal

E(X)=μ
Var(X)=Σ

t and multivariate t:

E(X)=μ
Var(X)=νσ²/(ν−2)

Double exponential:

E(X)=μ
Var(X)=2σ²

Cauchy: With some qualification it could be argued that the mean and variance of the Cauchy are not dependent.

E(X) and Var(X) do not exist
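As a quick numerical sanity check of the t entry above, one can simulate a location–scale t variable and compare the sample variance with νσ²/(ν − 2) (a sketch, assuming NumPy; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

nu, mu, sigma = 5.0, 2.0, 3.0                    # degrees of freedom, location, scale
x = mu + sigma * rng.standard_t(df=nu, size=1_000_000)

print(x.mean())   # should be close to mu
print(x.var())    # should be close to nu * sigma**2 / (nu - 2)
```

With ν = 5 and σ = 3 the predicted variance is 5 · 9 / 3 = 15, whatever the value of μ, which is the point of the answer.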

Reference

Carlin, Bradley P., and Thomas A. Louis. 1996. Bayes and Empirical Bayes Methods for Data Analysis, 2nd ed. Chapman and Hall/CRC, New York.


7
In any location-scale family the mean and variance will be functionally independent in this fashion!
whuber

1
David, the double exponential is an excellent example. Thanks! I did not think of that one. The t-distribution is also a good example, but isn't E(X) = 0 and Var(X) = v/(v-2)? Or does Carlin et al. (1996) define a generalized version of the t-distribution that is shifted in its mean and scaled by sigma^2?
Wolfgang

You are correct, the t-distribution appears to be frequently characterized with a mean = 0 and variance = 1, but the general pdf for t provided by Carlin and Louis explicitly includes both sigma and mu; the nu parameter accounts for the difference between the normal and the t.
David LeBauer

27

In fact, the answer is "no". Independence of the sample mean and variance characterizes the normal distribution. This was shown by Eugene Lukacs in "A Characterization of the Normal Distribution", The Annals of Mathematical Statistics, Vol. 13, No. 1 (Mar., 1942), pp. 91-93.

I didn't know this, but Feller, "Introduction to Probability Theory and Its Applications, Volume II" (1966, pg 86) says that R.C. Geary proved this, too.
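Lukacs' result can be illustrated empirically: across repeated samples, the sample mean and sample variance are uncorrelated for the normal but visibly correlated for a skewed distribution such as the exponential (a sketch, assuming NumPy; the sample size and replicate count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 10, 100_000                    # sample size, number of replicates

def mean_var_corr(samples):
    """Correlation between sample means and sample variances across replicates."""
    means = samples.mean(axis=1)
    variances = samples.var(axis=1, ddof=1)
    return np.corrcoef(means, variances)[0, 1]

normal_corr = mean_var_corr(rng.normal(size=(reps, n)))
expon_corr = mean_var_corr(rng.exponential(size=(reps, n)))

print(normal_corr)   # near 0: consistent with independence
print(expon_corr)    # clearly positive: the variance tracks the mean
```

Zero correlation is of course weaker than independence (any symmetric distribution also gives correlation near zero), but the exponential case shows the kind of dependence that Lukacs' theorem rules out for the normal.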


3
@onestop I guess it is an unfortunate artifact of my age. It is no exaggeration to say that Feller's books revolutionized how probability was done, worldwide. A large part of our modern notation is due to him. For decades, his books were the probability books to study. Maybe they still should be. BTW: I've added the title for those who haven't heard of his books.

1
I have asked a question about other funny characterisations ... stats.stackexchange.com/questions/4364/…
robin girard

2
Jay, thanks for the reference to the paper by Lukacs, who nicely shows that the sampling distributions of the sample mean and variance are only independent for the normal distribution. As for the second central moment, there are some distributions where it is not a function of the first moment (David gave some nice examples).
Wolfgang

1
Geary, R. C. (1936), “The Distribution of ‘Student’s’ Ratio for Non-Normal Samples,” Journal of the Royal Statistical Society, Suppl. 3, 178–184.
vqv