# In simple linear regression, where does the formula for the variance of the residuals come from?


According to a text I am using, the formula for the variance of the $i^{th}$ residual is given by:

$\sigma^2\left(1 - \frac{1}{n} - \frac{(x_i - \overline{x})^2}{S_{xx}}\right)$

I find this hard to believe, since the residual is the difference between the observed value and the fitted value; if one were to compute the variance of that difference, I would at least expect some "plus" signs in the resulting expression. Any help in understanding this derivation would be appreciated.

Is it possible that some "$+$" signs in the text are being misprinted (or misread) as "$-$" signs?
whuber

I had considered that, but it happened twice in the text (in 2 different chapters), so I thought it unlikely. Obviously, a derivation of the formula would help! :)
Eric

The negative signs are a result of the positive correlation between an observation and its fitted value, which reduces the variance of the difference.
Glen_b -Reinstate Monica

@Glen Thanks for explaining why the formula makes sense, along with the matrix derivation below.
Eric

Answers:


The intuition about the "plus" signs related to the variance (from the fact that even when we compute the variance of a difference of independent random variables, we add their variances) is correct but fatally incomplete: if the random variables involved are not independent, then covariances are also involved, and covariances may be negative. There exists an expression that is almost what the OP (and I) thought the expression in question "should" be, and it is the variance of the prediction error; denote it $e^0 = y^0 - \hat y^0$, where $y^0 = \beta_0 + \beta_1 x^0 + u^0$:

$\text{Var}(e^0) = \sigma^2 \cdot \left(1 + \frac{1}{n} + \frac{(x^0 - \overline{x})^2}{S_{xx}}\right)$
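As a quick sanity check of this closed form (my own illustration, with made-up parameters, not part of the original answer), we can simulate many samples, fit OLS on each, predict at an out-of-sample point $x^0$, and compare the empirical variance of $e^0 = y^0 - \hat y^0$ with the formula:

```python
import numpy as np

# Monte Carlo sketch: empirical variance of the prediction error vs. the
# closed-form sigma^2 * (1 + 1/n + (x0 - xbar)^2 / Sxx).
rng = np.random.default_rng(42)
beta0, beta1, sigma = 1.0, 2.0, 1.0          # hypothetical true parameters
x = np.array([1.0, 2.0, 3.0, 5.0, 8.0])      # fixed in-sample regressor
n, x0 = len(x), 10.0                          # x0 is the out-of-sample point
Sxx = np.sum((x - x.mean()) ** 2)

errs = []
for _ in range(100_000):
    y = beta0 + beta1 * x + rng.normal(0, sigma, n)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / Sxx   # OLS slope
    b0 = y.mean() - b1 * x.mean()                         # OLS intercept
    y0 = beta0 + beta1 * x0 + rng.normal(0, sigma)        # new observation
    errs.append(y0 - (b0 + b1 * x0))                      # prediction error

theory = sigma**2 * (1 + 1/n + (x0 - x.mean())**2 / Sxx)
assert abs(np.var(errs) / theory - 1) < 0.05  # empirical variance within 5%
```

Note that $u^0$ is drawn independently of the sample used for fitting, which is exactly why no covariance term appears in this formula.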

The critical difference between the variance of the prediction error and the variance of the estimation error (i.e., of the residual) is that the error term of the predicted observation is not correlated with the estimator, since the value $y^0$ was not used in constructing the estimator and computing the estimates, being an out-of-sample value.

The algebra for both proceeds in exactly the same way up to a point (using $^0$ instead of $_i$), but then it diverges. Specifically:

In the simple linear regression $y_i = \beta_0 + \beta_1 x_i + u_i$ with $\text{Var}(u_i) = \sigma^2$, the variance of the estimator $\hat \beta = (\hat \beta_0, \hat \beta_1)'$ is still

$\text{Var}(\hat \beta) = \sigma^2 (\mathbf{X}'\mathbf{X})^{-1}$

We have

$\mathbf{X}'\mathbf{X} = \begin{bmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{bmatrix}$

and so

$(\mathbf{X}'\mathbf{X})^{-1} = \begin{bmatrix} \sum x_i^2 & -\sum x_i \\ -\sum x_i & n \end{bmatrix} \cdot \left[n\sum x_i^2 - \left(\sum x_i\right)^2\right]^{-1}$

We have

$\left[n\sum x_i^2 - \left(\sum x_i\right)^2\right] = \left[n\sum x_i^2 - n^2\overline{x}^2\right] = n\left[\sum x_i^2 - n\overline{x}^2\right] = n\sum \left(x_i^2 - \overline{x}^2\right) \equiv nS_{xx}$

So

$(\mathbf{X}'\mathbf{X})^{-1} = \begin{bmatrix} (1/n)\sum x_i^2 & -\overline{x} \\ -\overline{x} & 1 \end{bmatrix} \cdot (1/S_{xx})$

which means that

$\text{Var}(\hat \beta_0) = \sigma^2\left(\frac{1}{n} + \frac{\overline{x}^2}{S_{xx}}\right), \qquad \text{Var}(\hat \beta_1) = \sigma^2 (1/S_{xx})$

$\text{Cov}(\hat \beta_0, \hat \beta_1) = -\sigma^2 (\overline{x}/S_{xx})$
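These closed-form entries can be verified numerically (a sketch of my own, using an arbitrary made-up design, not part of the original answer) against $\sigma^2(\mathbf{X}'\mathbf{X})^{-1}$ computed directly:

```python
import numpy as np

# Compare sigma^2 * (X'X)^{-1} with the scalar formulas for the
# variances and covariance of the OLS coefficient estimators.
x = np.array([1.0, 2.0, 4.0, 7.0])           # hypothetical regressor values
n = len(x)
sigma2 = 2.5                                  # hypothetical error variance
X = np.column_stack([np.ones(n), x])          # design matrix with intercept

V = sigma2 * np.linalg.inv(X.T @ X)           # sigma^2 (X'X)^{-1}

Sxx = np.sum((x - x.mean()) ** 2)
var_b0 = sigma2 * (1 / n + x.mean() ** 2 / Sxx)   # Var(beta0_hat)
var_b1 = sigma2 / Sxx                              # Var(beta1_hat)
cov_b0b1 = -sigma2 * x.mean() / Sxx                # Cov(beta0_hat, beta1_hat)

assert np.isclose(V[0, 0], var_b0)
assert np.isclose(V[1, 1], var_b1)
assert np.isclose(V[0, 1], cov_b0b1)
```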

The $i$-th residual is defined as

$\hat u_i = y_i - \hat y_i = (\beta_0 - \hat \beta_0) + (\beta_1 - \hat \beta_1)x_i + u_i$

The actual coefficients are treated as constants, the regressor is fixed (or we condition on it) and has zero covariance with the error term, but the estimators are correlated with the error term, because the estimators contain the dependent variable, and the dependent variable contains the error term. So we have

$\text{Var}(\hat u_i) = \left[\text{Var}(u_i) + \text{Var}(\hat \beta_0) + x_i^2\text{Var}(\hat \beta_1) + 2x_i\text{Cov}(\hat \beta_0, \hat \beta_1)\right] + 2\text{Cov}\left(\left[(\beta_0 - \hat \beta_0) + (\beta_1 - \hat \beta_1)x_i\right], u_i\right)$

$= \left[\sigma^2 + \sigma^2\left(\frac{1}{n} + \frac{\overline{x}^2}{S_{xx}}\right) + x_i^2\sigma^2(1/S_{xx}) - 2x_i\sigma^2(\overline{x}/S_{xx})\right] + 2\text{Cov}\left(\left[(\beta_0 - \hat \beta_0) + (\beta_1 - \hat \beta_1)x_i\right], u_i\right)$

Pack it up a bit to obtain

$\text{Var}(\hat u_i) = \left[\sigma^2 \cdot \left(1 + \frac{1}{n} + \frac{(x_i - \overline{x})^2}{S_{xx}}\right)\right] + 2\text{Cov}\left(\left[(\beta_0 - \hat \beta_0) + (\beta_1 - \hat \beta_1)x_i\right], u_i\right)$

The term in the big parenthesis has exactly the same structure as the variance of the prediction error, the only change being that instead of $x_i$ we will have $x^0$ (and the variance will be that of $e^0$ and not of $\hat u_i$). The last covariance term is zero for the prediction error because $y^0$, and hence $u^0$, is not included in the estimators, but it is not zero for the estimation error because $y_i$, and hence $u_i$, is part of the sample and so is included in the estimator. We have

$2\text{Cov}\left(\left[(\beta_0 - \hat \beta_0) + (\beta_1 - \hat \beta_1)x_i\right], u_i\right) = 2E\left(\left[(\beta_0 - \hat \beta_0) + (\beta_1 - \hat \beta_1)x_i\right]u_i\right)$

$= -2E(\hat \beta_0 u_i) - 2x_iE(\hat \beta_1 u_i) = -2E\left(\left[\overline{y} - \hat \beta_1\overline{x}\right]u_i\right) - 2x_iE(\hat \beta_1 u_i)$

the last substitution following from how $\hat \beta_0$ is calculated. Continuing,

$\ldots = -2E(\overline{y}u_i) - 2(x_i - \overline{x})E(\hat \beta_1 u_i) = -2\frac{\sigma^2}{n} - 2(x_i - \overline{x})E\left[\frac{\sum(x_i - \overline{x})(y_i - \overline{y})}{S_{xx}}u_i\right]$

$= -2\frac{\sigma^2}{n} - 2\frac{(x_i - \overline{x})}{S_{xx}}\left[\sum(x_i - \overline{x})E(y_iu_i - \overline{y}u_i)\right]$

$= -2\frac{\sigma^2}{n} - 2\frac{(x_i - \overline{x})}{S_{xx}}\left[-\frac{\sigma^2}{n}\sum_{j\neq i}(x_j - \overline{x}) + (x_i - \overline{x})\sigma^2\left(1 - \frac{1}{n}\right)\right]$

$= -2\frac{\sigma^2}{n} - 2\frac{(x_i - \overline{x})}{S_{xx}}\left[-\frac{\sigma^2}{n}\sum(x_i - \overline{x}) + (x_i - \overline{x})\sigma^2\right]$

$= -2\frac{\sigma^2}{n} - 2\frac{(x_i - \overline{x})}{S_{xx}}\left[0 + (x_i - \overline{x})\sigma^2\right] = -2\frac{\sigma^2}{n} - 2\sigma^2\frac{(x_i - \overline{x})^2}{S_{xx}}$
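This covariance term is the subtle part of the derivation, so here is a Monte Carlo check (my addition, with arbitrary made-up parameters) that $2\text{Cov}\left([(\beta_0 - \hat \beta_0) + (\beta_1 - \hat \beta_1)x_i], u_i\right)$ really equals $-2\sigma^2/n - 2\sigma^2(x_i - \overline{x})^2/S_{xx}$:

```python
import numpy as np

# Simulate many samples and estimate the covariance between
# (b0 - b0hat) + (b1 - b1hat) * x_i and the error term u_i.
rng = np.random.default_rng(1)
beta0, beta1, sigma = 0.5, 1.5, 1.0
x = np.array([1.0, 2.0, 4.0, 6.0, 9.0])      # hypothetical regressor values
n, i = len(x), 4                              # focus on the i-th observation
Sxx = np.sum((x - x.mean()) ** 2)

a_vals, u_vals = [], []
for _ in range(100_000):
    u = rng.normal(0, sigma, n)
    y = beta0 + beta1 * x + u
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / Sxx    # OLS slope
    b0 = y.mean() - b1 * x.mean()                          # OLS intercept
    a_vals.append((beta0 - b0) + (beta1 - b1) * x[i])
    u_vals.append(u[i])

empirical = 2 * np.cov(a_vals, u_vals)[0, 1]
theory = -2 * sigma**2 / n - 2 * sigma**2 * (x[i] - x.mean())**2 / Sxx
assert abs(empirical - theory) < 0.05         # both negative, and they match
```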

Inserting this into the expression for the variance of the residual, we obtain

$\text{Var}(\hat u_i) = \sigma^2 \cdot \left(1 - \frac{1}{n} - \frac{(x_i - \overline{x})^2}{S_{xx}}\right)$
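As a final cross-check (my own sketch, with a made-up design), this formula should agree observation by observation with the hat-matrix form $\sigma^2(1 - h_{ii})$ derived in the other answer:

```python
import numpy as np

# Residual-variance formula vs. sigma^2 * (1 - h_ii) from the hat matrix.
x = np.array([0.5, 1.0, 3.0, 4.5, 6.0])      # hypothetical regressor values
n = len(x)
sigma2 = 1.7                                  # hypothetical error variance
X = np.column_stack([np.ones(n), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix

Sxx = np.sum((x - x.mean()) ** 2)
formula = sigma2 * (1 - 1/n - (x - x.mean())**2 / Sxx)
hat_form = sigma2 * (1 - np.diag(H))

assert np.allclose(formula, hat_form)         # identical for every i
```

This also makes explicit that $h_{ii} = 1/n + (x_i - \overline{x})^2/S_{xx}$ in simple regression.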

So hats off to the text the OP is using.

(I have skipped some algebraic manipulations, no wonder OLS algebra is taught less and less these days...)

SOME INTUITION

So it appears that what works "against" us (larger variance) when predicting, works "for us" (lower variance) when estimating. This is a good starting point for one to ponder why an excellent fit may be a bad sign for the prediction abilities of the model (however counter-intuitive this may sound...).
The fact that we are estimating the expected value of the regressor decreases the variance by $1/n$. Why? Because by estimating, we "close our eyes" to some error-variability existing in the sample, since we are essentially estimating an expected value. Moreover, the larger the deviation of an observation of a regressor from the regressor's sample mean, the smaller the variance of the residual associated with this observation will be... the more deviant the observation, the less deviant its residual... It is the variability of the regressors that works for us, by "taking the place" of the unknown error-variability.

But that's good for estimation. For prediction, the same things turn against us: now, by not taking into account, however imperfectly, the variability in $y^0$ (since we want to predict it), our imperfect estimators obtained from the sample show their weaknesses: we estimated the sample mean, but we don't know the true expected value, so the variance increases. If we have an $x^0$ that is far away from the sample mean as calculated from the other observations, too bad: our prediction-error variance gets another boost, because the predicted $\hat y^0$ will tend to go astray... In more scientific language, "optimal predictors in the sense of reduced prediction-error variance represent a shrinkage towards the mean of the variable under prediction". We do not try to replicate the dependent variable's variability; we just try to stay "close to the average".

Thank you for a very clear answer! I'm glad that my "intuition" was correct.
Eric

Alecos, I really don't think this is right.
Glen_b -Reinstate Monica

@Alecos the mistake is in taking the parameter estimates to be uncorrelated with the error term. This part: $\text{Var}(\hat u_i) = \text{Var}(u_i) + \text{Var}(\hat \beta_0) + x_i^2\text{Var}(\hat \beta_1) + 2x_i\text{Cov}(\hat \beta_0, \hat \beta_1)$ isn't right.
Glen_b -Reinstate Monica

@Eric I apologize for misleading you earlier. I have tried to provide some intuition for both formulas.

+1 You can see why I did the multiple regression case for this... thanks for going to the extra effort of doing the simple-regression case.
Glen_b -Reinstate Monica


Sorry for the somewhat terse answer, perhaps overly-abstract and lacking a desirable amount of intuitive exposition, but I'll try to come back and add a few more details later. At least it's short.

Given $H = X(X^TX)^{-1}X^T$,

$\begin{aligned}\text{Var}(y - \hat y) &= \text{Var}((I - H)y) \\ &= (I - H)\text{Var}(y)(I - H)^T \\ &= \sigma^2(I - H)^2 \\ &= \sigma^2(I - H)\end{aligned}$

Hence

$\text{Var}(y_i - \hat y_i) = \sigma^2(1 - h_{ii})$

In the case of simple linear regression ... this gives the answer in your question.

This answer also makes sense: since $\hat y_i$ is positively correlated with $y_i$, the variance of the difference should be smaller than the sum of the variances.

--

Edit: Explanation of why $(I - H)$ is idempotent.

(i) $H$ is idempotent:

$H^2 = X(X^TX)^{-1}X^TX(X^TX)^{-1}X^T = X\ [(X^TX)^{-1}X^TX]\ (X^TX)^{-1}X^T = X(X^TX)^{-1}X^T = H$

(ii) $(I - H)^2 = I^2 - IH - HI + H^2 = I - 2H + H = I - H$
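The two idempotency claims above are easy to confirm numerically (a sketch of mine, using an arbitrary full-rank design):

```python
import numpy as np

# Check H^2 = H and (I - H)^2 = I - H for a random design matrix.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(6), rng.normal(size=6)])  # intercept + regressor
H = X @ np.linalg.inv(X.T @ X) @ X.T                    # hat matrix
I = np.eye(6)

assert np.allclose(H @ H, H)                  # H is idempotent
assert np.allclose((I - H) @ (I - H), I - H)  # hence I - H is too
```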

This is a very nice derivation for its simplicity, although one step that is not clear to me is why $(I - H)^2 = (I - H)$. Maybe when you expand on your answer a little, as you're planning to do anyway, you could say a little something about that?
Jake Westfall

@Jake Added a couple of lines at the end
Glen_b -Reinstate Monica