Queen's University at Kingston

HyperMetricsNotes



G. Formulating and Testing Multiple Hypotheses

So far we have limited ourselves to tests of single restrictions on the LRM. For example, the null hypothesis $H_0: \beta_2=0$ places one restriction on the possible values of the population parameters. But often we would like to test multiple hypotheses together. For example,
$$\eqalign{H_0: &\beta_2 = 0\cr&\beta_3=0\cr&\beta_4=0\cr}$$
Notice that this null hypothesis places three simultaneous restrictions on the population parameters. One might think to test each of the restrictions with a t test and then somehow aggregate the results of those t tests into a decision about all three restrictions. But remember that a classical hypothesis test requires that $\alpha$ , the probability of a Type I error, be controlled. That means we must be able to determine the probability of rejecting $H_0$ when it is true. But each of the single t tests ignores the other restrictions, so setting $\alpha$ for each test separately doesn't determine the $\alpha$ for a combination of them. Without control over $\alpha$ it is not possible to determine how strongly the data support or don't support the multiple hypothesis.
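To see the problem concretely, suppose the three t statistics happened to be independent (in general they are not) and each test used $\alpha = .05$ . The probability of rejecting at least one true restriction would then be
$$1 - (1-.05)^3 \approx .14$$
which is nearly three times the nominal size. Since the t statistics are in fact correlated, the true size of such a combined procedure is unknown.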

We therefore need a single test statistic for a multiple hypothesis. We will restrict attention to linear hypotheses, which are linear restrictions on the values contained in the parameter vector $\beta$ . A linear restriction on $\beta$ takes the form
$$R\beta = c$$
where R is an r x k matrix of constants and c is an r x 1 vector of constants. For example, if k = 5 then the joint hypothesis above could be written:
$$\left[\matrix{0&1&0&0&0\cr 0&0&1&0&0 \cr 0&0&0&1&0\cr}\right] \left[\matrix{\beta_1\cr\beta_2\cr\beta_3\cr\beta_4\cr\beta_5\cr}\right] = \left[\matrix{0\cr 0\cr 0\cr}\right]$$
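The same framework covers restrictions that link coefficients. For instance, with k = 5 the hypothesis $H_0: \beta_2=\beta_3$ and $\beta_4=1$ involves r = 2 restrictions and would be written:
$$\left[\matrix{0&1&-1&0&0\cr 0&0&0&1&0\cr}\right] \left[\matrix{\beta_1\cr\beta_2\cr\beta_3\cr\beta_4\cr\beta_5\cr}\right] = \left[\matrix{0\cr 1\cr}\right]$$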

Logic of F-tests for multiple hypotheses

  1. Compute RSS for the unrestricted model (without imposing $H_0$ ). Denote this $RSS_{UR}$ .
  2. Impose $H_0$ on the OLS estimates, run the resulting regression, and compute RSS. Denote this $RSS_R$ . That is, when minimizing the sum of squared residuals (e'e), we force the estimates $\hat\beta$ to satisfy $R\hat\beta=c$ . Adding the restriction reduces OLS's freedom to choose values of $\hat\beta$ that fit the data. It must be the case that $RSS_R\ge RSS_{UR}$ .
  3. Now consider two mutually exclusive states of the world:
    1. $H_0$ is true

      In this world the true parameters actually satisfy the restriction being imposed on the estimated parameters in the restricted model. Imposing a true restriction should not make very much difference in the results. That is, imposing a true restriction should not change the amount of variation in Y attributed to the residual u. $RSS_{UR}$ would not be very different from $RSS_R$ if $H_0$ is true. The only reason they would differ at all when $H_0$ is true is that allowing $R\hat\beta\ne c$ lets the estimates pick up sampling variation in the relationship between X and Y. On the other hand,

    2. $H_0$ is false, $H_A$ is true

      Imposing a false restriction on the estimated parameters should hurt OLS's ability to fit the Y observations. That is, if $R\beta\ne c$ then this would be reflected in $R\hat\beta \ne c$ , and the RSS in the restricted and unrestricted models would be much different.

  4. Based on comparing these two possible states, a large difference between the restricted and unrestricted RSS is therefore evidence against $H_0$ . The ratio ($RSS_R$ -$RSS_{UR}$ )/$RSS_{UR}$ tells us the proportionate change in the amount of variation attributed to variance in u when the restriction is imposed. In fact, under $H_0$ , the difference $RSS_R$ -$RSS_{UR}$ , which is a random variable, is (once scaled by $\sigma^2$ ) distributed as a chi-squared random variable with r degrees of freedom (the number of linear restrictions), and it is distributed independently of the unrestricted RSS when $H_0$ is true (the scaling is spelled out just after this list). The intuition for this result is simply that the change in the RSS is due to explaining the sampling variation if the restriction is indeed true. And the sampling variation is independent of the model (by assumption u is drawn independently of everything else going on in the LRM).
  5. Because the numerator and denominator are independent under $H_0$ , the expected value of the ratio is approximately the ratio of expected values. The mean of a chi-squared random variable equals its degrees of freedom. Therefore, the ratio given above would have a mean of approximately r/(N-k) under $H_0$ . It makes sense to normalize the ratio so that it has a mean of (approximately) 1 under $H_0$ , which leads to the test statistic for linear hypotheses$$F = { {RSS_R-RSS_{UR}\over r }\over {RSS_{UR}\over N-k} }$$
    Under $H_0$ , F follows the F(r,N-k) distribution. Therefore, we should reject $H_0$ if $F \ge F^\star_\alpha(r,N-k)$ . The alternative is simply that the restriction is not true:
    $$H_A: R\beta \ne c$$
    This is inherently a two-sided test, because we are not specifying which equations in $H_A$ have a greater-than sign and which have a less-than sign.
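    To spell out the distribution claim from point 4: under $H_0$ and the classical assumptions, $(RSS_R-RSS_{UR})/\sigma^2$ is chi-squared with r degrees of freedom, $RSS_{UR}/\sigma^2$ is chi-squared with N-k degrees of freedom, and the two are independent. The unknown $\sigma^2$ cancels when the scaled chi-squareds are divided:
    $$ F = { { (RSS_R-RSS_{UR})/\sigma^2 \over r} \over {RSS_{UR}/\sigma^2 \over N-k} } = { {RSS_R-RSS_{UR}\over r }\over {RSS_{UR}\over N-k} } $$
    which is why F can be computed from the two RSS values alone, without knowing $\sigma^2$ .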

Three Special Cases of F Tests

  1. r=1

    When the restriction matrix R contains one row (r=1), the test statistic is distributed F(1,N-k). A chi-squared random variable with one degree of freedom is simply a standard normal Z variable squared. So an F(1,N-k) variable can be written $ {Z^2 \over C/(N-k)} $ , where C is a chi-squared variable with N-k degrees of freedom. But this expression is exactly the square of a t variable with N-k degrees of freedom. Hence a t-test is a special case of the more general F-test.
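    A quick way to see this in Stata (a sketch using the auto dataset that ships with Stata, and modern e-class syntax rather than the older _result() syntax used in the transcript below):

    . sysuse auto, clear
    . regress price mpg weight
    . test mpg                                 // F(1,N-k) test of H0: coefficient on mpg = 0
    . di "t squared = " (_b[mpg]/_se[mpg])^2   // equals the F statistic reported by -test-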

  2. $H_0: \beta_2=\beta_3=\dots=\beta_k=0$ This particular hypothesis (with r=k-1 restrictions) is called the test for overall significance. How would you interpret this hypothesis? Under this $H_0$ , the linear regression model reduces to $Y = \beta_1 + u$ . No X variables show up in the PRE. So this hypothesis is the hypothesis that none of the X variables is statistically related to Y. The alternative is that at least one of the X variables is related to Y, although which ones that might be is not specified.

    Notice that the restricted model includes only a constant. In the restricted model the OLS estimate of $\beta_1$ is simply $\hat\beta_1=\bar Y$ . The $RSS_R$ would then simply be TSS! The restricted unexplained variation is simply the total variation in Y around its sample mean. Recall that TSS = ESS + RSS, so $RSS_R$ - $RSS_{UR}$ equals TSS - $RSS_{UR}$ , which equals $ESS_{UR}$ . So the test statistic for a test of overall significance reduces to
    $$ F = { {ESS \over k-1} \over {RSS_{UR}\over N-k} }$$
    which is simply the ratio of the two MS (mean-squared) entries in the analysis of variance table. The F statistic reported in the Stata output table is exactly this number.
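    You can verify this against Stata's saved results (a sketch; y and the x's are placeholders for your own variables, and the e() names are the modern equivalents of the _result() codes used in the transcript below):

    . qui regress y x2 x3 x4
    . di "F reported = " e(F)
    . di "F by hand  = " (e(mss)/e(df_m)) / (e(rss)/e(df_r))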

  3. Test for overall significance and k=2

    We can go even further here. If we have a simple LRM (k=2), then the test for overall significance collapses to $H_0: \beta_2 = 0$ , which we know can also be tested with the t test $\hat\beta_2 / \hat{se}(\hat\beta_2)$ . But the F version of the test is F(1,N-k), which we know from example 1 is the square of a t test. And, indeed, the overall F statistic is exactly equal to the square of the t statistic for $\hat\beta_2$ in a simple LRM. You should confirm this fact; one way to do so is sketched below.
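    A sketch of the confirmation, again using the auto data (any single regressor will do):

    . qui regress price weight
    . di "overall F = " e(F)
    . di "t squared = " (_b[weight]/_se[weight])^2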

Example of specifying a joint hypothesis test

We return to the example started earlier concerning sexual behavior and religious background.

. * Let's test whether expected age of first intercourse
. * differs significantly by religious background
. test none cath oth

 ( 1)  none = 0.0
 ( 2)  cath = 0.0
 ( 3)  oth = 0.0

       F(  3,  5590) =   31.10
            Prob > F =    0.0000
. * Conclusion:  We can reject the hypothesis that
. *   expected age of first intercourse is the same for
. *   all religious backgrounds
. *
. *  Let's perform the F test directly rather than using "test"
. *  The regress above is the Unrestricted Model
. qui regress
. di "Unrestricted RSS = " _result(4)
Unrestricted RSS = 27598.636
. local u = _result(4)
. *  The Restricted Model imposes H0 on the estimates, which
. *  in this case means none cath oth all have zero coefficients
. *  So the Restricted model is
. regress age

  Source |       SS       df       MS                  Number of obs =    5594
---------+------------------------------               F(  0,  5593) =       .
   Model |        0.00     0           .               Prob > F      =       .
Residual |  28059.2594  5593   5.0168531               R-squared     =  0.0000
---------+------------------------------               Adj R-squared =  0.0000
   Total |  28059.2594  5593   5.0168531               Root MSE      =  2.2398

------------------------------------------------------------------------------
     age |      Coef.   Std. Err.       t     P>|t|       [95% Conf. Interval]
---------+--------------------------------------------------------------------
   _cons |   17.42063   .0299471    581.714   0.000       17.36192    17.47934
------------------------------------------------------------------------------

. di "The Restricted RSS = " _result(4)
The Restricted RSS = 28059.259
. local r = _result(4)
. di "So the F statistic for the test is " ((`r'-`u')/3)/(`u'/(5594-4))
So the F statistic for the test is 31.099197
. * Notice that this is the F statistic reported by test, and
. * since this is a test of overall significance it is also
. * equal to the F statistic reported in the regression output
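
The same computation in current Stata syntax would look roughly as follows (a sketch: e(rss) and e(df_r) replace _result(4) and the hard-coded degrees of freedom, and the regressor list for the unrestricted model is inferred from the test above):

. qui regress age none cath oth       // unrestricted model
. scalar u   = e(rss)
. scalar dfu = e(df_r)
. qui regress age                     // restricted model
. scalar r = e(rss)
. di "F = " ((r - u)/3)/(u/dfu)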

