Consider the following multiple linear regression model

$$y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + u_i, \quad i = 1, 2, \dots, n,$$ where $E(u \mid x_1, x_2) \neq 0$. Can this equation's parameters be estimated by OLS? If the OLS estimators can be computed, are they BLUE?

Solution

Under the Gauss–Markov assumptions, OLS provides the best linear unbiased estimator for the parameters of a linear model. Here, however, the zero conditional mean assumption fails, since $E(u \mid x_1, x_2) \neq 0$. The OLS estimators can still be computed mechanically (that requires only a full-rank design matrix), but they are biased, and therefore not BLUE.
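As an illustrative sketch (not part of the original solution), a small Monte Carlo simulation can show this bias. The data-generating process below is entirely made up: the regressor $x_1$ is constructed to be correlated with the error $u$, so the OLS slope estimate does not center on the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 2000
beta1 = 1.0          # true coefficient on x1
slopes = []
for _ in range(reps):
    u = rng.normal(size=n)
    x1 = u + rng.normal(size=n)   # x1 correlated with u, so E(u | x1) != 0
    x2 = rng.normal(size=n)       # an exogenous regressor
    y = 0.5 + beta1 * x1 + 0.8 * x2 + u
    X = np.column_stack([np.ones(n), x1, x2])
    b = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS is still computable
    slopes.append(b[1])

# Theoretical asymptotic bias here is Cov(x1, u) / Var(x1) = 1/2
bias = np.mean(slopes) - beta1
print(f"average bias of OLS slope: {bias:.3f}")
```

With this setup the estimated slope centers near 1.5 rather than the true 1.0: OLS is computable but systematically biased once exogeneity fails.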

Of course we can trade bias for lower variance, e.g. ridge regression. But my question concerns estimators with no bias: are there any other commonly used estimators that are unbiased but have higher variance than the OLS estimates?

If I had a huge data set, I could of course sub-sample it and estimate the parameters with less data, increasing the variance. I assume this could be hypothetically useful.

This is somewhat of a rhetorical question: the discussions of BLUE estimators I have read never exhibit a worse alternative, and I suspect that providing one would help people appreciate the power of the BLUE property.

Consider, for illustration, a regression of $y_i$, $i = 1, \dots, n$, on a constant (this readily generalizes to general GLS estimators). Here, $\{y_i\}$ is assumed to be a random sample from a population with mean $\mu$ and variance $\sigma^2$.

Then, we know that OLS is just $\hat{\mu} = \bar{y}$, the sample mean. To emphasize the point that each observation is weighted with weight $1/n$, write this as

$$\hat{\mu} = \sum_{i=1}^n \frac{1}{n} y_i.$$

It is well known that $\operatorname{Var}(\hat{\mu}) = \sigma^2 / n$.

Now, consider another estimator which can be written as

$$\tilde{\mu} = \sum_{i=1}^n w_i y_i,$$

where the weights are such that $\sum_i w_i = 1$. This ensures that the estimator is unbiased, as

$$E\left(\sum_{i=1}^n w_i y_i\right) = \sum_{i=1}^n w_i E(y_i) = \sum_{i=1}^n w_i \mu = \mu.$$

Its variance will exceed that of OLS unless $w_i = 1/n$ for all $i$ (in which case it of course reduces to OLS), which can for instance be shown via a Lagrangian:

$$\mathcal{L} = V(\tilde{\mu}) - \lambda\left(\sum_i w_i - 1\right) = \sum_i w_i^2 \sigma^2 - \lambda\left(\sum_i w_i - 1\right),$$

with the partial derivatives w.r.t. $w_i$ set to zero, giving $2\sigma^2 w_i - \lambda = 0$ for all $i$, and $\partial\mathcal{L}/\partial\lambda = 0$ giving $\sum_i w_i - 1 = 0$. Solving the first set of conditions for $\lambda$ and equating them across $i$ yields $w_i = w_j$, which, by the requirement that the weights sum to one, implies that $w_i = 1/n$ minimizes the variance.
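A short numerical check of this derivation (a sketch; the weights below are arbitrary random numbers normalized to sum to one): any such weighted estimator is unbiased, but its variance $\sigma^2 \sum_i w_i^2$ exceeds the equal-weight variance $\sigma^2/n$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 50, 20000
mu, sigma = 3.0, 2.0

w = rng.random(n)
w = w / w.sum()                 # arbitrary weights with sum(w) == 1

y = rng.normal(mu, sigma, size=(reps, n))
tilde = y @ w                   # weighted estimator mu-tilde, one per replication
hat = y.mean(axis=1)            # OLS, i.e. equal weights 1/n

print(f"mean of mu-tilde:  {tilde.mean():.3f}")   # close to mu: unbiased
print(f"var  of mu-tilde:  {tilde.var():.4f}")    # close to sigma^2 * sum(w^2)
print(f"var  of mu-hat:    {hat.var():.4f}")      # close to sigma^2 / n, smaller
```

The simulated variance of $\tilde{\mu}$ matches $\sigma^2 \sum_i w_i^2$ and strictly exceeds that of the sample mean, as the Lagrangian argument predicts.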

