Chapter 5: Calibration and Verification
The calibration of the Analyzer is defined by the intercept, a, and the slope, b.
For the case in which point 1 is reagent water, the calibration of the Analyzer is depicted graphically in Figure 33.
The operator designates point 1 as reagent water in the protocol by setting its concentration to 0 ppm. The
Analyzer interprets “0 ppm” as “reagent water.”
Figure 33: Two-Point Calibration (Point 1 is Reagent Water)
When the Analyzer measures the reagent water, it also measures the TOC in the reagents and dilution water
used in the measurement. Because the TOC concentration in the reagent water is very low, this measurement
approximates the TOC concentration in the reagents and dilution water for the particular volumes of reagents
and dilution water used in the calibration. The Analyzer corrects the calibration for the TOC in the reagents and
dilution water by setting the measured response (i.e., the Mass Response) for reagent water equal to 0 ppm.
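Put another way, because the reagent water point is assigned a concentration of 0 ppm, the calibration line passes through the point (R_RW, 0 ppm). Assuming the calibration takes the form concentration = a + b x response, which is consistent with the units of the slope b calculated below, this fixes the intercept at a = -b x R_RW.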
The operator enters the concentration of point 2 into the calibration protocol. The concentration of the standard used for point 2 is referred to as C_2. In the example in Figure 33, the operator set the concentration of point 2 to 100 ppm, so C_2 = 100 ppm. C_1 (or C_RW) for reagent water is zero.
The calibration constant, b, is calculated from the difference in the response to the two points:
b = (C_2 - C_RW) / (R_2 - R_RW)

where,
R_2 = Response of Analyzer to point 2
R_RW = Response of Analyzer to reagent water
C_1 = TOC concentration of point 1 (= 0 for RW)
C_2 = TOC concentration of point 2 (mg/L)
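As a worked illustration of this formula, the short Python sketch below computes b for a two-point calibration in which point 1 is reagent water and point 2 is a 100 ppm standard, and then applies the result to a sample response. The numeric response values, variable names, and the final sample-concentration step are hypothetical, chosen only to show the arithmetic; they are not taken from the Analyzer.

# Illustrative calculation of the calibration slope, b.
# All response values below are hypothetical placeholders, not Analyzer data.

def calibration_slope(c_2, c_rw, r_2, r_rw):
    # b = (C_2 - C_RW) / (R_2 - R_RW)
    return (c_2 - c_rw) / (r_2 - r_rw)

C_RW = 0.0      # ppm; point 1 (reagent water) is defined as 0 ppm
R_RW = 0.8      # hypothetical Mass Response for reagent water
C_2 = 100.0     # ppm; concentration of the point 2 standard
R_2 = 410.8     # hypothetical Mass Response for the 100 ppm standard

b = calibration_slope(C_2, C_RW, R_2, R_RW)   # = 100 / 410, about 0.244 ppm per response unit

# Because the reagent water point is assigned 0 ppm, an unknown sample's
# concentration can then be estimated as b * (R_sample - R_RW).
R_sample = 210.8                              # hypothetical sample response
C_sample = b * (R_sample - R_RW)              # about 51.2 ppm

print(f"b = {b:.4f} ppm per response unit")
print(f"Sample TOC = {C_sample:.1f} ppm")

The sample-concentration step assumes the calibration is applied as concentration = b x (response - R_RW), which follows from assigning 0 ppm to the reagent water point; the Analyzer's internal implementation may differ.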