Measurement error is unavoidable. There will always be some measurement variation that is due to the measurement system itself.

Most problematic measurement system issues arise from measuring attribute data in terms that rely on human judgment, such as good/bad or pass/fail. This is because it is very difficult for all testers to apply the same operational definition of what is “good” and what is “bad.”

However, such measurement systems are seen throughout industries. One example is quality control inspectors using a high-powered microscope to determine whether a contact lens is defect free. Hence, it is important to quantify how well such measurement systems are working.

The tool used for this kind of analysis is called attribute gage R&R. The R&R stands for repeatability and reproducibility. Repeatability means that the same operator, measuring the same thing, using the same gage, should get the same reading every time. Reproducibility means that different operators, measuring the same thing, using the same gage, should get the same reading every time.

Attribute gage R&R reveals two important findings: the percentage of repeatability and the percentage of reproducibility. Ideally, both percentages should be 100 percent, but the general rule of thumb is that anything above 90 percent is quite adequate.

Obtaining these percentages can be done with simple mathematics, and there is really no need for sophisticated software. Nevertheless, Minitab has a module called Attribute Agreement Analysis (in Minitab 13, it was called Attribute Gage R&R) that performs the same calculation and much more, which makes analysts’ lives easier.
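To illustrate just how simple the mathematics is, here is a minimal sketch of the two percentages in Python. The data, the operator names, and the two-trials-per-part design are hypothetical, and real studies often use more operators, parts, and trials; this sketch only shows the basic agreement counting.

```python
# Hypothetical attribute gage R&R sketch: each operator rates each part
# twice as pass ("P") or fail ("F").

def repeatability(ratings):
    """Percent of parts on which an operator agrees with his or her own
    earlier reading, averaged across operators."""
    scores = []
    for trials in ratings.values():  # trials: list of (trial1, trial2) per part
        agree = sum(1 for t1, t2 in trials if t1 == t2)
        scores.append(100.0 * agree / len(trials))
    return sum(scores) / len(scores)

def reproducibility(ratings):
    """Percent of parts on which all operators give the same reading
    (compared on each operator's first trial)."""
    per_part = list(zip(*ratings.values()))  # group first/second trials by part
    agree = sum(1 for part in per_part
                if len({t1 for t1, t2 in part}) == 1)
    return 100.0 * agree / len(per_part)

# Hypothetical study: 2 operators x 5 parts x 2 trials.
ratings = {
    "operator_a": [("P", "P"), ("F", "F"), ("P", "P"), ("P", "F"), ("F", "F")],
    "operator_b": [("P", "P"), ("F", "F"), ("P", "P"), ("P", "P"), ("F", "F")],
}
print(repeatability(ratings))    # A agrees on 4 of 5 parts, B on 5 of 5 -> 90.0
print(reproducibility(ratings))  # first readings match on all 5 parts -> 100.0
```

Here operator A’s disagreement with himself on one part drags repeatability down to 90 percent, even though the two operators agree with each other on every part, which is exactly the kind of finding the study is meant to surface.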

Having said that, it is important for analysts to understand what the statistical software is doing to make good sense of the report.

So What If the Gage R&R Is Not Good?

The key in all measurement systems is having a clear test method and clear criteria for what to accept and what to reject. The steps are as follows:

  •     Identify what is to be measured.
  •     Select the measurement instrument.
  •     Develop the test method and criteria for pass or fail.
  •     Test the test method and criteria (the operational definition) with some test samples (perform a gage R&R study).
  •     Confirm that the gage R&R in the study is close to 100 percent.
  •     Document the test method and criteria.
  •     Train all inspectors on the test method and criteria.
  •     Pilot run the new test method and criteria, and perform periodic gage R&R studies to confirm that the measurement system remains sound.
  •     Launch the new test method and criteria.