Comparison Cannot Be Perfect, Measurements Inherently Include Error


Definition: 

Measurement is the collection of quantitative data. A measurement is made by comparing a quantity with a standard unit. Since this comparison cannot be perfect, measurements inherently include error.

Examples:

The length of a piece of string can be measured by comparing the string against a meter stick.

Introduction

In a measurement process, the data may contain gross errors that deviate significantly from the true value, even when a quantity is measured repeatedly without significant changes in the measurement conditions. If such suspicious data are retained during data processing, the distorted results lead to an incorrect assessment of measurement accuracy. Correct identification of gross measurement errors is therefore an important issue in achieving reliable measurement results.

Gross errors can be reduced by using suitable measurement devices under appropriate physical conditions. However, it is quite difficult to avoid them entirely in measurement processes that involve large quantities of data.

Gross error identification has traditionally been based on statistics. A number of criteria have been used, such as the 3σ criterion, the Chauvenet criterion, the Grubbs criterion, and the Dixon criterion [1]. The existing methods [2-10] are based on a typical distribution, such as the Gaussian distribution, and require prior knowledge that the measurement data conform, at least approximately, to that distribution. In a practical measurement it may be difficult to obtain a large quantity of data, and the data may turn out not to follow the assumed distribution. Either situation can make the statistical methods [2-10] inapplicable for gross error identification and removal.
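As a concrete illustration of the simplest of these criteria, the sketch below applies the 3σ rule to a set of repeated readings. The data, sample size, and threshold are made up for illustration; as noted above, the rule presumes approximately Gaussian data and enough samples to estimate the standard deviation reliably.

import numpy as np

def three_sigma_outliers(data):
    """Flag readings whose deviation from the sample mean exceeds 3 sample
    standard deviations. Returns a boolean mask; True marks a suspected
    gross error. The rule presumes roughly Gaussian data."""
    data = np.asarray(data, dtype=float)
    deviations = np.abs(data - data.mean())
    return deviations > 3.0 * data.std(ddof=1)

# Twenty illustrative repeated readings (mm) of the same quantity,
# with one deliberately corrupted value at index 7.
rng = np.random.default_rng(0)
readings = 10.00 + rng.normal(0.0, 0.01, size=20)
readings[7] = 11.0                                    # simulated gross error
print(np.where(three_sigma_outliers(readings))[0])    # flags index 7

Note that with very few readings a single outlier inflates the sample standard deviation so much that the 3σ test may fail to flag it, which is one reason small samples limit these criteria.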

To address these issues and proceed with gross error identification and subsequent removal, a new method based on grey system theory is proposed. Its advantages are that the measurement data are not required to conform to a particular probability density distribution and that the sample size does not need to be large. The principle of gross error identification is presented and an identification criterion is proposed. A case study demonstrates the effectiveness of the proposed grey system method, which should be a convenient and useful tool for identifying gross errors in a precision measurement process.
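The paper's specific criterion is not reproduced here. Purely to illustrate the kind of check grey system theory makes possible, the sketch below fits a standard GM(1,1) grey model to a short sequence of readings and inspects the relative residuals; points that sit far outside the model fit are treated as suspicious. The data, the 3×median cut-off, and the function name are illustrative assumptions, not the authors' method.

import numpy as np

def gm11_relative_residuals(x0):
    """Fit a GM(1,1) grey model to a short, positive data sequence and
    return the relative residual of each point against the model fit.
    Textbook construction: accumulate the series, form background values,
    estimate the development coefficient a and grey input b by least
    squares, then restore the fitted sequence."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                                   # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    (a, b), *_ = np.linalg.lstsq(B, Y, rcond=None)       # grey parameters
    k = np.arange(n)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # fitted accumulated series
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])  # restored fitted series
    return np.abs(x0 - x0_hat) / x0                      # relative residuals

# Illustrative short sequence of repeated readings with one suspicious value.
readings = np.array([20.1, 20.3, 20.2, 23.9, 20.4, 20.2, 20.3])
resid = gm11_relative_residuals(readings)
suspect = resid > 3 * np.median(resid)                   # illustrative cut-off, not the paper's criterion
print(np.where(suspect)[0], resid.round(3))

The point of the sketch is that nothing in it assumes a Gaussian population or a large sample; the model is built directly from the short data sequence itself.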

Resolution is the minimum increment to which a measurement can be made. For us, this is a design parameter of
the X-Y stage motion encoders or galvo.

Accuracy is how close a measurement is to the "true" value. "Truth" is the traceability of the measurement to a primary standard at the National Institute of Standards and Technology (NIST), a US government agency. For us, this is conformance to our customer's metrology.

Precision is repeatability.  For us, precision is half the increase in laser line width or hole diameter when the laser
is run repeatedly over the same part a number of times. 

All are plus/minus numbers except resolution.
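A minimal sketch of how these three figures might be reported from repeated readings of a traceable reference feature. The numbers are invented, the reference value and resolution are assumed inputs, and the half-spread used for precision is a generic stand-in for the laser-specific definition above.

import numpy as np

# Invented repeated measurements (mm) of a reference feature whose certified,
# traceable value is assumed to be 5.000 mm.
reference = 5.000
readings = np.array([5.003, 5.001, 4.999, 5.002, 5.004, 5.000, 5.002])

accuracy_offset = readings.mean() - reference        # closeness to the "true" value
precision = 0.5 * (readings.max() - readings.min())  # half the spread of repeated runs (repeatability)
resolution = 0.001                                   # design parameter of the encoder, not derived from data

print(f"accuracy {accuracy_offset:+.4f} mm, precision +/-{precision:.4f} mm, resolution {resolution} mm")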

Correct identification and elimination of gross errors in plant measurements is a key factor for the performance of industrial on-line optimization. For rigorous nonlinear plant models, the gross error detection problem is very challenging. This paper presents two data reconciliation and gross error detection strategies for nonlinear models: one based on a serial elimination algorithm using a linearized model and another based on the Tjoa-Biegler contaminated normal distribution approach. The comparison is based upon the results obtained with a commercial on-line data reconciliation and optimization package using a rigorous model of two industrial refinery processes.

1. INTRODUCTION

Data reconciliation is widely used to adjust plant data and provide estimates for unmeasured variables and parameters. It improves the accuracy of measured variables and model parameters by exploiting redundancy in the measured data. Traditional data reconciliation assumes that only random errors exist in process data. If gross errors also occur, they need to be identified and eliminated; otherwise, the reconciled solution will be highly biased. Since data reconciliation is often used to provide better starting points for the economic optimization, it is very important that gross errors do not significantly affect the quality of the reconciled data.

Statistical hypothesis testing techniques have been employed to detect persistent gross errors [1]. However, correctly identifying all gross errors is still a challenging task, even for steady-state models. Existing strategies based on statistical tests sometimes report an incorrect number or location of gross errors. This happens for two reasons. First, a gross error may propagate through data reconciliation and contaminate the reconciled data. Second, redundancy in the measured data of a chemical or refinery process may be low compared to the size of the process.
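To make the hypothesis-testing idea concrete (this is the generic global test on a linear balance, not one of the specific strategies compared in this paper), the sketch below checks whether the constraint residuals of a small, made-up flow balance are larger than random measurement error can explain.

import numpy as np
from scipy import stats

def global_test(x, B, c, Sigma, alpha=0.05):
    """Chi-square (global) test for the presence of gross errors in a linear
    balance model B x = c with measurement covariance Sigma.
    With only random errors, the residual r = B x - c is zero-mean Gaussian
    with covariance B Sigma B^T, so r^T (B Sigma B^T)^{-1} r follows a
    chi-square distribution with one degree of freedom per independent constraint."""
    r = B @ x - c
    V = B @ Sigma @ B.T
    gamma = float(r @ np.linalg.solve(V, r))
    critical = stats.chi2.ppf(1.0 - alpha, df=B.shape[0])
    return gamma, critical, gamma > critical

# Made-up splitter balance S1 = S2 + S3 with standard deviations 0.5, 0.3, 0.3.
B = np.array([[1.0, -1.0, -1.0]])
c = np.zeros(1)
Sigma = np.diag([0.5, 0.3, 0.3]) ** 2
x = np.array([100.0, 55.0, 41.0])        # 4-unit imbalance, far beyond random error

gamma, critical, reject = global_test(x, B, c, Sigma)
print(f"gamma = {gamma:.1f}, chi2 critical = {critical:.2f}, gross error suspected: {reject}")

A test of this kind only signals that some gross error is present; locating it requires further steps, which is where the propagation and low-redundancy issues above make themselves felt.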

Most gross error detection and identification strategies have been designed for linear data reconciliation models, such as plant mass flow balances. A linear data reconciliation problem can be written as a constrained weighted least-squares minimization problem:

\min_{\tilde{x}} \; F = (x - \tilde{x})^{T} \Psi^{-1} (x - \tilde{x}) = \sum_{i=1}^{n} \frac{(x_i - \tilde{x}_i)^2}{\sigma_i^2} \qquad (1)

\text{s.t.} \quad B(\tilde{x} + a) = c
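For the standard linear case with the constraints written simply as B x̃ = c (dropping the second term for illustration, and taking the weight matrix Ψ as the diagonal measurement covariance, called Sigma in the code), problem (1) has a closed-form Lagrange-multiplier solution, and the standardized adjustments it produces are the usual basis for a measurement test. The sketch below, with invented numbers, reconciles a small four-stream balance containing one biased measurement; note how the bias also smears into neighbouring adjustments, which is the propagation issue mentioned above.

import numpy as np

def reconcile_linear(x, B, c, Sigma):
    """Weighted least-squares reconciliation for linear constraints B x_tilde = c.
    Closed form via Lagrange multipliers:
        x_tilde = x - Sigma B^T (B Sigma B^T)^{-1} (B x - c)
    Also returns the standardized adjustments used by the measurement test."""
    r = B @ x - c                            # constraint residuals
    V = B @ Sigma @ B.T
    K = Sigma @ B.T @ np.linalg.inv(V)       # maps residuals to adjustments
    a = K @ r                                # adjustments x - x_tilde
    cov_a = K @ B @ Sigma                    # covariance of the adjustments
    d = a / np.sqrt(np.diag(cov_a))          # standardized adjustments
    return x - a, d

# Made-up four-stream example: node A balances S1 = S2 + S3, node B balances S2 = S4.
B = np.array([[1.0, -1.0, -1.0,  0.0],
              [0.0,  1.0,  0.0, -1.0]])
c = np.zeros(2)
Sigma = np.diag([0.5, 0.5, 0.3, 0.5]) ** 2   # measurement variances
x = np.array([100.1, 52.0, 40.2, 59.9])      # S2 carries a gross error of roughly -8

x_tilde, d = reconcile_linear(x, B, c, Sigma)
print("reconciled:", x_tilde.round(2))
print("standardized adjustments:", d.round(2))   # largest |d| points at S2; the bias smears onto the others

Serial elimination strategies build on exactly this kind of test: remove the measurement with the largest test statistic, re-reconcile, and repeat until no test rejects.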
