
Consider the linear regression model y = X beta + epsilon, where E[epsilon|X] = 0 and Var[epsilon|X] = sigma^2 I_n. Denote A = (X'X)^-1 X'; we know that the OLS estimator is given by beta_hat = Ay. A shorter proof of the Gauss-Markov theorem: let beta_tilde = By be an arbitrary estimator of beta such that BX = I_k (i.e. beta_tilde is linear and unbiased). Show that Var[beta_tilde|X] - Var[beta_hat|X] = sigma^2 [BB' - AA']. Show that BB' - AA' = (B - A)(B - A)'. Conclude that OLS is the best. Denote by e the residual vector from the regression of y on X. Let e_j be any element of e. Show that Var[e_j]

Solution

Linear Regression Model:

                y = X beta + epsilon

where:          E[epsilon|X] = 0

                Var[epsilon|X] = sigma^2 I_n

                A = (X'X)^-1 X', and the OLS estimator is beta_hat = Ay
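As a quick numerical illustration (a minimal numpy sketch; the simulated data, dimensions, and seed are made up for the example, and only A and beta_hat follow the notation above):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3

# Simulated design matrix and true coefficients (illustrative values only)
X = rng.normal(size=(n, k))
beta = np.array([1.0, -2.0, 0.5])
sigma = 1.5
y = X @ beta + sigma * rng.normal(size=n)   # y = X beta + epsilon

# A = (X'X)^{-1} X', so beta_hat = A y
A = np.linalg.solve(X.T @ X, X.T)
beta_hat = A @ y
print(beta_hat)  # should be close to beta for a sample this size
```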

Gauss-Markov Theorem – Proof:

Suppose beta_tilde = Cy is another linear estimator of beta, and write C = (X'X)^-1 X' + D = A + D, where D is a non-zero k x n matrix. Since we restrict attention to unbiased estimators, minimum mean squared error is equivalent to minimum variance. The goal is therefore to show that such an estimator has a variance no smaller than that of beta_hat, the OLS estimator.

The expectation of beta_tilde is:

                E[beta_tilde|X] = E[(A + D)(X beta + epsilon)|X] = (I_k + DX) beta,

so beta_tilde is unbiased if and only if DX = 0. Its conditional variance is then

                Var[beta_tilde|X] = (A + D) Var[epsilon|X] (A + D)' = sigma^2 (AA' + DD'),

where the cross terms vanish because DA' = DX (X'X)^-1 = 0. In the question's notation B = A + D, so BB' - AA' = DD' = (B - A)(B - A)', which is exactly the identity to be shown. Since DD' is a positive semidefinite matrix, Var[beta_tilde|X] exceeds Var[beta_hat|X] = sigma^2 AA' = sigma^2 (X'X)^-1 by a positive semidefinite matrix.
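The algebra above can be checked numerically (a sketch under the same kind of simulated design as before; constructing D by multiplying a random matrix by the annihilator M = I_n - X(X'X)^-1 X' is just one arbitrary way to get DX = 0):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 3
X = rng.normal(size=(n, k))

A = np.linalg.solve(X.T @ X, X.T)    # A = (X'X)^{-1} X'
M = np.eye(n) - X @ A                # annihilator: M X = 0

# Any D of the form D0 @ M satisfies D X = 0 (unbiasedness of B = A + D)
D = rng.normal(size=(k, n)) @ M
B = A + D

# Cross terms vanish, so B B' = A A' + D D'
assert np.allclose(B @ B.T, A @ A.T + D @ D.T)

# B B' - A A' = (B - A)(B - A)' = D D' is positive semidefinite
eigvals = np.linalg.eigvalsh(D @ D.T)
assert eigvals.min() > -1e-10
```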

                The Gauss–Markov theorem states that under the spherical-errors assumption (that is, the errors are uncorrelated and homoscedastic) the OLS estimator beta_hat is efficient in the class of linear unbiased estimators; it is the best linear unbiased estimator (BLUE). Efficiency here means that any other estimator that is linear in y and unbiased has a conditional variance exceeding Var[beta_hat|X] by a positive semidefinite matrix.

Then

                Var[beta_tilde|X] - Var[beta_hat|X] = sigma^2 DD' >= 0

in the positive semidefinite sense, so the OLS estimator is best.
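Finally, a Monte Carlo sanity check (again a sketch with made-up dimensions, seed, and repetition count): the coordinate-wise sampling variance of any such beta_tilde = By should weakly exceed that of the OLS coordinates.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, sigma, reps = 200, 3, 1.5, 5000
X = rng.normal(size=(n, k))
beta = np.array([1.0, -2.0, 0.5])

A = np.linalg.solve(X.T @ X, X.T)
M = np.eye(n) - X @ A
B = A + rng.normal(size=(k, n)) @ M   # B X = I_k, so B y is unbiased

ols_draws, alt_draws = [], []
for _ in range(reps):
    y = X @ beta + sigma * rng.normal(size=n)
    ols_draws.append(A @ y)
    alt_draws.append(B @ y)

# Coordinate-wise sampling variances: OLS should be (weakly) smaller
print(np.var(ols_draws, axis=0))
print(np.var(alt_draws, axis=0))
```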

