Last Time in Math 106

  • OLS estimates can be rewritten in terms of the response.
  • The expected values of \(\hat{\beta}_0\) and \(\hat{\beta}_1\) are \(\beta_0\) and \(\beta_1\), respectively.
  • The fitted value at \(X = \bar{x}\) is \(\bar{y}\); that is, the OLS line passes through \((\bar{x}, \bar{y})\).

Variance of the \(\hat{\beta}\)s

We have shown that \[ \text{Var}\left( \hat{\beta}_1 | X \right) = \sigma^2 \frac{1}{\texttt{SXX}} \] and we will show that \[ \text{Var}\left( \hat{\beta}_0 | X \right) = \sigma^2 \left(\frac{1}{n} + \frac{\bar{x}^2}{\texttt{SXX}}\right) \]
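As a quick numerical illustration, these two formulas can be evaluated directly. The predictor values and \(\sigma^2\) below are made-up toy values, not from class:

```python
# Evaluate the two variance formulas for given x values and a known sigma^2.
# The data and sigma^2 here are hypothetical, chosen only for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # assumed predictor values
sigma2 = 4.0                              # assumed known error variance

n = len(x)
xbar = x.mean()
SXX = np.sum((x - xbar) ** 2)             # SXX = sum of (x_i - xbar)^2

var_beta1 = sigma2 / SXX                              # Var(beta1-hat | X)
var_beta0 = sigma2 * (1 / n + xbar**2 / SXX)          # Var(beta0-hat | X)
print(var_beta1, var_beta0)
```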

Variance Properties

The general form for the variance of a linear combination of random variables is given by \[ \text{Var}\left( a_0 + \sum_{i=1}^n a_i u_i \right) = \sum_{i=1}^n a_i^2 \text{Var}(u_i) + 2 \sum^{n-1}_{i=1} \sum^{n}_{j=i+1} a_i a_j \text{Cov}(u_i, u_j). \] When the variables are uncorrelated, this simplifies to the previously discussed expression.
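A small simulation can confirm this identity; the coefficients and covariance matrix below are arbitrary choices for illustration:

```python
# Check the linear-combination variance formula against an empirical variance.
# Coefficients and covariance matrix are assumed values, not from the lecture.
import numpy as np

rng = np.random.default_rng(0)
a0, a = 1.0, np.array([2.0, -1.0, 0.5])    # assumed a_0 and a_i
Sigma = np.array([[1.0, 0.3, 0.0],
                  [0.3, 2.0, 0.5],
                  [0.0, 0.5, 1.5]])         # assumed Cov(u_i, u_j) matrix

u = rng.multivariate_normal(np.zeros(3), Sigma, size=200_000)
lin = a0 + u @ a                            # a_0 + sum_i a_i u_i for each draw

# The double-sum formula equals the quadratic form a' Sigma a;
# the constant a_0 does not affect the variance.
formula = a @ Sigma @ a
print(lin.var(), formula)                   # the two values should be close
```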

Covariance

The covariance of two random variables \(u_i\), \(u_j\) is defined as \[ \text{Cov}\left( u_i , u_j \right) = \text{E}\left\{ [u_i - \text{E}(u_i)][u_j - \text{E}(u_j)] \right\} = \text{Cov}\left(u_j, u_i \right) \]

Note: When \(u_i = u_j\), this simplifies to \(\text{Var}(u_i)\).
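As a sketch, the sample analogue of this definition can be computed directly and compared with numpy's built-in estimate; the simulated variables below are hypothetical:

```python
# Compute covariance from the definition E{[u_i - E(u_i)][u_j - E(u_j)]}
# (estimated by a sample mean) and compare with numpy's estimate.
import numpy as np

rng = np.random.default_rng(1)
ui = rng.normal(size=100_000)
uj = 0.6 * ui + rng.normal(size=100_000)   # correlated with ui by construction

cov_def = np.mean((ui - ui.mean()) * (uj - uj.mean()))
print(cov_def, np.cov(ui, uj, bias=True)[0, 1])  # should agree
```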

Q: What’s another way to think about covariance?

Variance of \(\hat{\beta}_0\)

\[ \begin{align*} \text{Var}\left( \hat{\beta}_0 | X \right) &= \text{Var}\left( \bar{y} - \hat{\beta}_1 \bar{x} | X \right) \\ &= \text{Var}(\bar{y} | X) + \bar{x}^2 \text{Var} \left( \hat{\beta}_1 | X \right) - 2 \bar{x} \text{ Cov}(\bar{y}, \hat{\beta}_1 | X) \end{align*} \]

Class Activity: With a partner, show that the expression above simplifies to \(\sigma^2 \left(\frac{1}{n} + \frac{\bar{x}^2}{\texttt{SXX}}\right)\).
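As a non-algebraic check on your answer, a Monte Carlo simulation with made-up true parameters should reproduce the claimed variance:

```python
# Simulate many samples from an assumed simple linear regression model and
# compare the empirical variance of beta0-hat with sigma^2 (1/n + xbar^2/SXX).
import numpy as np

rng = np.random.default_rng(2)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # fixed predictor values
beta0, beta1, sigma = 2.0, 0.5, 1.0            # assumed true parameters

n, xbar = len(x), x.mean()
SXX = np.sum((x - xbar) ** 2)

b0_hats = []
for _ in range(100_000):
    y = beta0 + beta1 * x + rng.normal(0, sigma, n)
    b1 = np.sum((x - xbar) * (y - y.mean())) / SXX   # OLS slope
    b0_hats.append(y.mean() - b1 * xbar)             # OLS intercept

print(np.var(b0_hats), sigma**2 * (1 / n + xbar**2 / SXX))  # close values
```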

Covariance of the \(\beta\)s

We know that \(\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}\), so \[ \begin{align*} \text{Cov}\left( \hat{\beta}_0, \hat{\beta}_1 | X\right) &= \text{Cov}\left( \bar{y} - \hat{\beta}_1 \bar{x} , \hat{\beta}_1 | X \right)\\ &= \text{Cov}(\bar{y}, \hat{\beta}_1 | X) - \bar{x} \text{Var}( \hat{\beta}_1 | X)\\ &= - \sigma^2 \frac{\bar{x}}{\texttt{SXX}}, \end{align*} \] since \(\text{Cov}(\bar{y}, \hat{\beta}_1 | X) = 0\).
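The same style of simulation, again with assumed parameter values, can check this covariance result:

```python
# Simulate repeated samples and compare the empirical covariance of
# (beta0-hat, beta1-hat) with -sigma^2 xbar / SXX. All values are assumed.
import numpy as np

rng = np.random.default_rng(3)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
beta0, beta1, sigma = 2.0, 0.5, 1.0

n, xbar = len(x), x.mean()
SXX = np.sum((x - xbar) ** 2)

b0s, b1s = [], []
for _ in range(100_000):
    y = beta0 + beta1 * x + rng.normal(0, sigma, n)
    b1 = np.sum((x - xbar) * (y - y.mean())) / SXX
    b0s.append(y.mean() - b1 * xbar)
    b1s.append(b1)

print(np.cov(b0s, b1s)[0, 1], -sigma**2 * xbar / SXX)  # should be close
```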

Based on our previous assumptions about the errors, the Gauss-Markov theorem states that the OLS estimates have the smallest variance among all linear unbiased estimators; they are therefore called the best linear unbiased estimates (BLUE).

Estimated Variance

Since we will not generally know the population variance \(\sigma^2\), we will use the estimate \(\hat{\sigma}^2 = \text{RSS}/(n - 2)\), where RSS is the residual sum of squares. Then

\[ \widehat{\text{Var}} \left( \hat{\beta}_1 | X \right) = \hat{\sigma}^2 \frac{1}{\texttt{SXX}}, \qquad \widehat{\text{Var}} \left( \hat{\beta}_0 | X \right) = \hat{\sigma}^2 \left( \frac{1}{n} + \frac{\bar{x}^2}{\texttt{SXX}}\right), \]

and the standard error, se, is then \[ \text{se}\left( \hat{\beta}_1 | X \right) = \sqrt{ \widehat{\text{Var}} \left( \hat{\beta}_1 | X \right) }. \]

Note: Standard error refers to the square root of the estimated variance of a statistic, while the estimated standard deviation refers to variability between values of a random variable.
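Here is a minimal sketch of these computations on made-up data; all values and variable names are illustrative:

```python
# Fit OLS by hand, estimate sigma^2 by RSS/(n-2), and compute the standard
# errors of both coefficients. The data are fabricated for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.3, 3.1, 3.4, 4.2, 4.8, 5.3])   # made-up responses

n, xbar = len(x), x.mean()
SXX = np.sum((x - xbar) ** 2)
b1 = np.sum((x - xbar) * (y - y.mean())) / SXX  # OLS slope
b0 = y.mean() - b1 * xbar                       # OLS intercept

resid = y - (b0 + b1 * x)
sigma2_hat = np.sum(resid**2) / (n - 2)         # RSS / (n - 2)

se_b1 = np.sqrt(sigma2_hat / SXX)
se_b0 = np.sqrt(sigma2_hat * (1 / n + xbar**2 / SXX))
print(se_b0, se_b1)
```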

Confidence Intervals and \(t\)-Tests

Q: How do you create a \((1-\alpha) \times 100 \%\) confidence interval?

For a \(t\)-distribution, we will use the critical value \(t(\alpha/2, df) = t(\alpha/2, n-2)\). Hence, we have

\[ \hat{\beta}_0 - t\left(\tfrac{\alpha}{2}, n-2\right) \text{se}\left(\hat{\beta}_0 | X \right) \le \beta_0 \le \hat{\beta}_0 + t\left(\tfrac{\alpha}{2}, n-2\right) \text{se}\left(\hat{\beta}_0 | X \right) \]

This will be the same for the slope!
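A short sketch of the interval computation, with placeholder values for the estimate and its standard error:

```python
# Compute a (1 - alpha) x 100% confidence interval for beta_0 using the
# t critical value. The estimate, standard error, and n are placeholders.
from scipy import stats

b0, se_b0, n = 1.95, 0.42, 6     # assumed estimate, standard error, sample size
alpha = 0.05

tcrit = stats.t.ppf(1 - alpha / 2, df=n - 2)   # t(alpha/2, n-2)
lower, upper = b0 - tcrit * se_b0, b0 + tcrit * se_b0
print(lower, upper)              # 95% confidence interval for beta_0
```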

Hypothesis Testing

For the hypothesis test,

\[ \begin{align*} H_0: \quad \beta_0 &= \beta_0^*,\; \beta_1 \text{ arbitrary}\\ H_a: \quad \beta_0 &\neq \beta_0^*,\; \beta_1 \text{ arbitrary}, \end{align*} \]

the \(t\)-statistic is calculated as before, \[ t = \frac{\hat{\beta}_0 - \beta_0^*}{\text{se}\left(\hat{\beta}_0 | X \right)}. \]
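A sketch of the corresponding two-sided test, again with placeholder values:

```python
# Two-sided t-test of H0: beta_0 = beta_0*. All numeric values are
# placeholders chosen only to make the snippet runnable.
from scipy import stats

b0, se_b0, n = 1.95, 0.42, 6     # assumed estimate, standard error, sample size
beta0_star = 0.0                 # hypothesized value under H0

t_stat = (b0 - beta0_star) / se_b0
p_value = 2 * (1 - stats.t.cdf(abs(t_stat), df=n - 2))  # two-sided p-value
print(t_stat, p_value)
```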

Q: How would you set up (and interpret) the hypothesis for the slope?