The procedure presented in this practice consists of three basic steps: planning the interlaboratory study, guiding the testing phase of the study, and analyzing the test result data. The analysis uses tabular, graphical, and statistical diagnostic tools to evaluate the consistency of the data, so that unusual values may be detected and investigated, and it includes the calculation of the numerical measures of precision of the test method pertaining to both within-laboratory repeatability and between-laboratory reproducibility.

Tests performed on presumably identical materials under presumably identical circumstances do not, in general, yield identical results. This is attributed to unavoidable random errors inherent in every test procedure; the factors that may influence the outcome of a test cannot all be completely controlled. In the practical interpretation of test data, this inherent variability has to be taken into account. For instance, the difference between a test result and some specified value may be within what can be expected from unavoidable random errors, in which case a real deviation from the specified value has not been demonstrated. Similarly, the difference between test results from two batches of material does not indicate a fundamental quality difference if it is no greater than can be attributed to the inherent variability of the test procedure.

Many different factors (apart from random variations between supposedly identical specimens) may contribute to the variability in the application of a test method, including: (a) the operator, (b) the equipment used, (c) the calibration of the equipment, and (d) the environment (temperature, humidity, air pollution, etc.). Changing laboratories changes each of these factors. The variability between test results obtained by different operators or with different equipment will usually be greater than that between test results obtained by a single operator using the same equipment. The variability between test results taken over a long period of time, even by the same operator, will usually be greater than that obtained over a short period of time, because of the greater possibility of changes in each of the above factors, especially the environment.

The general term for expressing the closeness of test results to the "true" value or the accepted reference value is accuracy. To be of practical value, standard procedures are required for determining the accuracy of a test method, both in terms of its bias and in terms of its precision. This practice provides a standard procedure for determining the precision of a test method.

In evaluating test methods, precision is expressed in terms of two measurement concepts: repeatability and reproducibility. Under repeatability conditions the factors listed above remain reasonably constant and usually contribute only minimally to the variability. Under reproducibility conditions the factors are generally different (that is, they change from laboratory to laboratory) and usually contribute appreciably to the variability of test results. Thus, repeatability and reproducibility are two practical extremes of precision.

The repeatability measure, by excluding the factors (a) through (d) as contributing variables, is not intended as a mechanism for verifying the ability of a laboratory to maintain "in-control" conditions for routine operational factors such as operator-to-operator and equipment differences or any effects of longer time intervals between test results. Such a control study is a separate issue for each laboratory to consider for itself and is not a recommended part of an interlaboratory study. The reproducibility measure (which includes the factors (a) through (d) as sources of variability) reflects what precision might be expected when random portions of a homogeneous sample are sent to random "in-control" laboratories.
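For orientation only, and not as a requirement of this practice, the two precision measures for a single material are commonly estimated along the following lines when p laboratories each report n test results (the symbols below are illustrative):

\[
  s_r = \sqrt{\frac{1}{p}\sum_{i=1}^{p} s_i^{2}},
  \qquad
  s_L^{2} = s_{\bar{x}}^{2} - \frac{s_r^{2}}{n},
  \qquad
  s_R = \sqrt{s_r^{2} + s_L^{2}},
\]

where $s_i$ is the standard deviation of the $n$ test results in laboratory $i$, $s_{\bar{x}}$ is the standard deviation of the $p$ laboratory averages, $s_r$ is the repeatability standard deviation, $s_L^{2}$ is the between-laboratory variance component (taken as zero if the difference is negative), and $s_R$ is the reproducibility standard deviation.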
To obtain reasonable estimates of repeatability and reproducibility precision, it is necessary in an interlaboratory study to guard against excessively sanitized data, in the sense that only the uniquely best operators are involved or that a laboratory takes unusual steps to get "good" results. It is also important to recognize, and to consider how to treat, "poor" results that may have unacceptable assignable causes (for example, departures from the prescribed procedure); the inclusion of such results in the final precision estimates might be questioned.

An essential aspect of collecting useful, consistent data is careful planning and conduct of the study. Questions concerning the number of laboratories required for a successful study, as well as the number of test results per laboratory, affect the confidence in the precision statements resulting from the study. Other issues involve the number, range, and types of materials to be selected for the study, and the need for a well-written test method and careful instructions to the participating laboratories.

To evaluate the consistency of the data obtained in an interlaboratory study, two statistics may be used: the "k-value", used to examine the consistency of the within-laboratory precision from laboratory to laboratory, and the "h-value", used to examine the consistency of the test results from laboratory to laboratory. Graphical as well as tabular diagnostic tools help in these examinations.
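As an illustration only (the data and variable names below are hypothetical, and the formulas are the commonly used forms rather than a normative part of this practice), the following sketch shows how h and k might be computed for a single material: each laboratory's average is compared with the average of all laboratory averages, and each laboratory's standard deviation is compared with the pooled within-laboratory standard deviation.

```python
# Sketch of the h and k consistency statistics for one material,
# assuming p laboratories each report n replicate test results.
# The data below are invented for illustration.
import math

results = [
    [10.1, 10.3, 10.2],   # laboratory 1
    [10.6, 10.5, 10.7],   # laboratory 2
    [ 9.9, 10.0, 10.1],   # laboratory 3
]
p = len(results)          # number of laboratories
n = len(results[0])       # test results per laboratory

cell_avg = [sum(r) / n for r in results]                        # laboratory averages
cell_sd = [math.sqrt(sum((x - a) ** 2 for x in r) / (n - 1))    # laboratory standard deviations
           for r, a in zip(results, cell_avg)]

grand_avg = sum(cell_avg) / p                                   # average of the laboratory averages
s_xbar = math.sqrt(sum((a - grand_avg) ** 2 for a in cell_avg) / (p - 1))
s_r = math.sqrt(sum(s ** 2 for s in cell_sd) / p)               # pooled within-laboratory (repeatability) SD

# h examines the consistency of the laboratory averages;
# k examines the consistency of the within-laboratory precision.
h = [(a - grand_avg) / s_xbar for a in cell_avg]
k = [s / s_r for s in cell_sd]

for i, (hi, ki) in enumerate(zip(h, k), start=1):
    print(f"laboratory {i}: h = {hi:+.2f}, k = {ki:.2f}")
```

In such a sketch, values of h far from zero, or values of k much larger than 1, would flag laboratories whose averages or within-laboratory scatter appear inconsistent with the rest of the study and therefore warrant investigation, in the spirit of the consistency examination described above.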