Two methods have been developed to determine the nickel content of steel. In a sample of five replications of the first method on a certain kind of steel, the average measurement was 3.16% with a standard deviation of 0.042%. The average of seven replications of the second method was 3.24% with a standard deviation of 0.048%. Assume that it is known that the population variances are equal. Using hypothesis testing, can we conclude that there is a difference in the mean level of nickel between the two methods at the 0.01 level of significance?
Solution
Let mu1 be the mean for the first method.
Let mu2 be the mean for the second method.
The test hypothesis:
Ho: mu1 = mu2 (null hypothesis)
Ha: mu1 not equal to mu2 (alternative hypothesis)
Since the population variances are assumed equal, the test statistic uses the pooled variance:
sp^2 = ((n1-1)s1^2 + (n2-1)s2^2)/(n1+n2-2)
     = (4(0.042)^2 + 6(0.048)^2)/(5+7-2)
     = 0.002088, so sp = 0.0457
t = (xbar1-xbar2)/(sp*sqrt(1/n1+1/n2))
  = (3.16-3.24)/(0.0457*sqrt(1/5+1/7))
  = -2.99
The degrees of freedom = n1+n2-2 = 5+7-2 = 10.
It is a two-tailed test.
Given alpha = 0.01, the critical values are t(0.005, df=10) = -3.169 and 3.169 (from the Student t table).
The rejection region: reject the null hypothesis if t < -3.169 or t > 3.169.
Since t = -2.99 lies between -3.169 and 3.169, we do not reject the null hypothesis.
So we cannot conclude that there is a difference in the mean level of nickel between the two methods at the 0.01 level of significance.
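As a sketch, the calculation above can be verified with a short Python script using only the standard library; the critical value 3.169 is taken from the Student t table, as in the solution.

```python
import math

# Summary statistics from the problem
n1, xbar1, s1 = 5, 3.16, 0.042   # first method
n2, xbar2, s2 = 7, 3.24, 0.048   # second method

# Pooled variance (population variances assumed equal)
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)

# Pooled two-sample t statistic
t = (xbar1 - xbar2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

df = n1 + n2 - 2
t_crit = 3.169  # t(0.005, df=10) from the Student t table

print(round(t, 2))       # -2.99
print(abs(t) > t_crit)   # False -> do not reject the null hypothesis
```

Because |t| = 2.99 does not exceed the critical value 3.169, the script confirms that the null hypothesis is not rejected at the 0.01 level.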
