Consider the linear regression model $y = X\beta + \varepsilon$, where $E[\varepsilon \mid X] = 0$ and $\operatorname{Var}[\varepsilon \mid X] = \sigma^2 I_n$. Denote $A = (X'X)^{-1}X'$; we know that the OLS estimator is given by $\hat\beta = Ay$. Show that $\hat\beta$ is the best linear unbiased estimator of $\beta$ (Gauss–Markov theorem).
Solution
Linear Regression Model:
$$y = X\beta + \varepsilon$$
where $E[\varepsilon \mid X] = 0$ and $\operatorname{Var}[\varepsilon \mid X] = \sigma^2 I_n$.
With $A = (X'X)^{-1}X'$, the OLS estimator is given by $\hat\beta = Ay = (X'X)^{-1}X'y$.
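As a quick illustration, here is a minimal NumPy sketch of the closed form $\hat\beta = (X'X)^{-1}X'y$; the function name `ols_estimate` and the simulated data are illustrative assumptions, not part of the original problem:

```python
import numpy as np

def ols_estimate(X, y):
    """Return the OLS estimator beta_hat = (X'X)^{-1} X' y.

    Uses a linear solve rather than an explicit inverse for numerical
    stability; the result equals A @ y with A = (X'X)^{-1} X'.
    """
    return np.linalg.solve(X.T @ X, X.T @ y)

# Small simulated example (illustrative only)
rng = np.random.default_rng(0)
n, k = 200, 3
X = rng.normal(size=(n, k))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(size=n)

print(ols_estimate(X, y))  # close to beta_true
```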
Gauss-Markov Theorem – Proof:
Suppose $\tilde\beta = Cy$ is another linear estimator of $\beta$, and write $C = (X'X)^{-1}X' + D$, where $D$ is a $k \times n$ non-zero matrix. Since we restrict attention to unbiased estimators, minimum mean squared error is equivalent to minimum variance. The goal is therefore to show that such an estimator has a variance no smaller than that of $\hat\beta$, the OLS estimator.
The expectation of $\tilde\beta$ is
$$E[\tilde\beta \mid X] = E[Cy \mid X] = CX\beta = \big((X'X)^{-1}X' + D\big)X\beta = \beta + DX\beta,$$
so $\tilde\beta$ is unbiased for every $\beta$ if and only if $DX = 0$. Its conditional variance is then
$$\operatorname{Var}[\tilde\beta \mid X] = C\,\operatorname{Var}[y \mid X]\,C' = \sigma^2 CC' = \sigma^2\big((X'X)^{-1} + DD'\big) = \operatorname{Var}[\hat\beta \mid X] + \sigma^2 DD',$$
where the cross terms vanish because $DX = 0$. Since $DD'$ is a positive semidefinite matrix, $\operatorname{Var}[\tilde\beta \mid X]$ exceeds $\operatorname{Var}[\hat\beta \mid X]$ by a positive semidefinite matrix.
The Gauss–Markov theorem states that under the spherical errors assumption (that is, the errors are uncorrelated and homoscedastic), the OLS estimator is efficient in the class of linear unbiased estimators; it is the best linear unbiased estimator (BLUE). Efficiency here means that no other estimator that is linear in $y$ and unbiased has a smaller variance.
Then
$$\operatorname{Var}[\tilde\beta \mid X] - \operatorname{Var}[\hat\beta \mid X] \succeq 0.$$
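As an informal numerical check (not part of the original solution), the sketch below simulates the model and compares the sampling covariance of OLS with that of an alternative linear unbiased estimator $\tilde\beta = (A + D)y$, where $D$ is constructed so that $DX = 0$; the construction of $D$ and the simulation settings are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, sigma = 50, 3, 1.0
X = rng.normal(size=(n, k))
beta = np.array([1.0, -2.0, 0.5])

A = np.linalg.solve(X.T @ X, X.T)              # A = (X'X)^{-1} X'
P = X @ A                                      # projection onto col(X)
D = rng.normal(size=(k, n)) @ (np.eye(n) - P)  # guarantees D X = 0, so C y is unbiased
C = A + D                                      # alternative linear unbiased estimator

ols, alt = [], []
for _ in range(20000):
    y = X @ beta + sigma * rng.normal(size=n)
    ols.append(A @ y)
    alt.append(C @ y)

cov_ols = np.cov(np.array(ols).T)
cov_alt = np.cov(np.array(alt).T)

# The difference should be (approximately) positive semidefinite,
# matching Var[beta_tilde|X] - Var[beta_hat|X] = sigma^2 D D'.
print(np.linalg.eigvalsh(cov_alt - cov_ols))  # eigenvalues >= 0 up to simulation noise
```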