For the Standard Test Method for Compressive Strength of Cylindrical Concrete Specimens, 1) describe three possible sources of error, 2) describe the accuracy of the results, and 3) account for the accuracy of the results using three or more sources of error.
Solution
This test method covers determination of compressive strength of cylindrical concrete specimens such as molded cylinders and drilled cores. The values stated in either inch-pound or SI units are to be regarded separately as standard. The SI units are shown in brackets. The values stated in each system may not be exact equivalents; therefore, each system shall be used independently of the other. Combining values from the two systems may result in nonconformance with the standard.
This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Possible sources of error –
Major sources of measurement uncertainty can be grouped into the following categories.
1. Static Calibration Uncertainty –
Calibration is the set of operations that establishes, under specified conditions, the relationship between values indicated by a measuring instrument or measuring system, or values represented by a material measure or a reference material, and the corresponding values of a quantity realized by a reference standard. The term calibration has often been associated with the act of making adjustments, when in fact the calibration process provides information so that adequate adjustments can be made if required. Performing a calibration does not always require an adjustment.
2. Testing Machine Uncertainty During Use –
The measurement uncertainty contributors in this section have the potential of making all calibration sources of measurement uncertainty insignificant. Keep in mind that sources of uncertainty are combined by the root-sum-square (RSS) method, which gives added weight to the major contributors; a numerical sketch of this combination follows item 4 below. A material testing machine ill-suited for a particular test can contribute errors in force measurement and application well in excess of all other combined uncertainties related to the testing machine's use.
3. Force Measuring and Application System Effects –
Drift during use can be related to the system's inability to control well: the system may need to wander off the target force by a relatively large amount before the control signal makes an adjustment to correct it. Drift can also occur because system devices change with temperature, and creep in the system's force transducer can add to this uncertainty as well. Noise can occur in the form of mechanical noise, electrical noise, or both; some degree of mechanical noise is almost always present when the system is running. In a modern, well-designed, and well-controlled system operating within its normal ranges of measurement and control, the total noise due to electrical and mechanical influences may be ±0.1 % or less.
4. Environment –
Temperature during a test may be significantly different from the temperature during calibration, and an evaluation of the effect of this difference should be performed. Where applicable, test results may be corrected for the temperature difference. Temperature changes during the test may also affect the results; these gradients should be known and included in the uncertainty analysis. A typical load cell temperature coefficient specification (13) is a maximum effect on output of ±0.0015 %/°C.
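To make this concrete, here is a minimal Python sketch of the RSS combination mentioned under item 2, including a temperature term built from the ±0.0015 %/°C coefficient quoted above. Every other number (the calibration, noise, and drift percentages and the 10 °C temperature difference) is an assumed placeholder for illustration, not a value taken from the standard.

```python
import math

# Hypothetical uncertainty contributors, expressed as a percentage of reading.
# All values are assumed placeholders except the temperature coefficient,
# which uses the ±0.0015 %/°C figure quoted in item 4 above.
calibration_uncertainty = 0.25   # % of reading, from a calibration certificate (assumed)
noise_uncertainty = 0.10         # % of reading, mechanical + electrical noise (assumed)
drift_uncertainty = 0.15         # % of reading, drift/creep during use (assumed)

temp_coefficient = 0.0015        # % of output per degree C (typical load cell spec)
temp_difference = 10.0           # °C between calibration and use (assumed)
temperature_uncertainty = temp_coefficient * temp_difference  # = 0.015 %

# Root-sum-square (RSS) combination: the largest contributors dominate the
# result, so an ill-suited testing machine can swamp all calibration terms.
combined = math.sqrt(
    calibration_uncertainty**2
    + noise_uncertainty**2
    + drift_uncertainty**2
    + temperature_uncertainty**2
)

print(f"Temperature contribution: {temperature_uncertainty:.4f} % of reading")
print(f"Combined (RSS) uncertainty: {combined:.3f} % of reading")
```

With these placeholder inputs the combined value is dominated by the calibration term, which illustrates the point made in item 2: the RSS combination gives the largest contributor the most weight.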
Account for the accuracy of the results –
1. The percentage of error for the loads within the proposed range of use of the testing machine shall not exceed 1.0 % of the indicated load.
2. The accuracy of the testing machine shall be verified by applying five test loads in four approximately equal increments in ascending order. The difference between any two successive test loads shall not exceed one third of the difference between the maximum and minimum test loads.
3. The test load as indicated by the testing machine and the applied load computed from the readings of the verification device shall be recorded at each test point. Calculate the error, E, and the percentage of error, Ep, for each point from these data as follows (a numerical sketch follows this list):
E = A - B
Ep = 100(A - B)/B
where:
A = load, lbf [kN], indicated by the machine being verified, and
B = applied load, lbf [kN], as determined by the calibrating device.
4. The report on the verification of a testing machine shall state within what loading range it was found to conform to specification requirements, rather than reporting a blanket acceptance or rejection. In no case shall the loading range be stated as including loads below the value that is 100 times the smallest change of load estimable on the load-indicating mechanism of the testing machine, or loads within that portion of the range below 10 % of the maximum range capacity.
5. The indicated load of a testing machine shall not be corrected, either by calculation or by the use of a calibration diagram, to obtain values within the required permissible variation.
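As a numerical illustration of items 1 through 4 (referenced from item 3 above), the following Python sketch applies the E and Ep formulas to five hypothetical test points, checks the one-third spacing rule, and computes the lowest load that could be included in the reported loading range. The machine capacity, resolution, and load readings are invented for the example and are not taken from any verification report.

```python
# A minimal sketch of the verification arithmetic in items 1-4 above, using
# invented numbers: A is the machine's indicated load and B is the applied
# load from the calibrating device, both in lbf.
machine_capacity = 250000.0   # lbf, assumed maximum range capacity
resolution = 50.0             # lbf, assumed smallest estimable change of load

test_points = [  # (A, B) pairs, in ascending order of applied load
    (10050.0, 10000.0),
    (20080.0, 20000.0),
    (29900.0, 30000.0),
    (40120.0, 40000.0),
    (50200.0, 50000.0),
]
applied_loads = [b for _, b in test_points]

# Item 2: no two successive test loads may differ by more than one third of
# the difference between the maximum and minimum test loads.
max_step = (max(applied_loads) - min(applied_loads)) / 3.0
spacing_ok = all(b2 - b1 <= max_step
                 for b1, b2 in zip(applied_loads, applied_loads[1:]))
print(f"Load spacing requirement met: {spacing_ok}")

# Item 3: E = A - B and Ep = 100(A - B)/B, checked against the 1.0 % limit of item 1.
for a, b in test_points:
    error = a - b
    percent_error = 100.0 * (a - b) / b
    print(f"A={a:>8.1f}  B={b:>8.1f}  E={error:>7.1f}  Ep={percent_error:>6.2f} %  "
          f"within 1.0 %: {abs(percent_error) <= 1.0}")

# Item 4: the reported loading range may not extend below 100 times the
# smallest estimable change of load, nor below 10 % of the maximum capacity.
lower_limit = max(100.0 * resolution, 0.10 * machine_capacity)
print(f"Lowest load that may be included in the verified range: {lower_limit:.0f} lbf")
```

With these placeholder readings every Ep falls within the 1.0 % limit and the spacing rule is satisfied, so the loading range would be reported from the computed lower limit up to the highest verified load rather than as a blanket acceptance.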

