Part 8 – Calibration
8.1 Overview and Methods
Calibration of the Q46C2 is required to accurately match the sensor characteristics to the
monitor/analyzer. Because the output of the conductivity sensor does not degrade over time, the
sensor typically needs to be calibrated only at initial installation and then cleaned periodically to
maintain proper system accuracy.
Since the conductivity of a solution is greatly affected by temperature, proper settings for thermal
compensation are critical for accurate operation. Before calibrating the instrument for the very first
time, select the proper temperature compensation method and parameters in the configuration
menus.
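For illustration, a common linear compensation scheme normalizes a raw reading to its equivalent
at 25 °C using the sensor's TC factor (slope in %/°C). The sketch below shows that generic
calculation only; it is not the Q46C2 firmware, and the 2.0 %/°C default is an assumption for
illustration.

    # Hypothetical sketch of linear temperature compensation (illustration only).
    def compensate_to_25c(raw_us_cm, temp_c, tc_pct_per_c=2.0):
        """Normalize a conductivity reading taken at temp_c to 25 degrees C.

        raw_us_cm    -- measured conductivity in uS/cm at temp_c
        temp_c       -- solution temperature in degrees C
        tc_pct_per_c -- assumed linear TC factor in percent per degree C
        """
        alpha = tc_pct_per_c / 100.0
        return raw_us_cm / (1.0 + alpha * (temp_c - 25.0))

    # Example: 1200 uS/cm measured at 30 C is reported as ~1091 uS/cm at 25 C.
    print(compensate_to_25c(1200.0, 30.0))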
When using conductivity calibration standards for a wet calibration, take care not to inadvertently
contaminate the reference solution: always clean the sensor thoroughly, rinse it in tap water, and
then finish with a rinse in pure or de-ionized water. In addition, note that calibration solutions below
20 μS can be very unstable and are accurate only at the labeled reference temperature. Moving the
sensor back and forth between reference solutions of different values can quickly contaminate the
solutions and render them inaccurate.
The system provides two methods of conductivity calibration: 1-point (wet calibration) and cell
constant. These two methods are significantly different. In addition, a sensor zero-cal is used on
initial installation to set the range zeros for the sensor in use. See Sections 8.11 through 8.13 for
brief descriptions of their uses.
8.11 1-Point Calibration Explained
The 1-point calibration method is generally known as the "grab sample" calibration method. In a
1-point calibration, the sensor may be removed from the application and placed into a reference
solution, or it may be left in the measurement process and calibrated against a known reference
value. The 1-point calibration adjusts the sensor slope to match the exact calibration point.
Readings beyond that point are then extrapolated from the determined slope of the calibration line.
Since the sensor slope does not degrade over time, frequent recalibration is unnecessary.
Calibration accuracy can be optimized by calibrating with a reference solution that is close to the
values typically measured.
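Conceptually, the 1-point adjustment reduces to a single scale factor applied to all subsequent
readings. The sketch below illustrates that idea under this simple assumption; it is not the
analyzer's actual algorithm.

    # Hypothetical sketch of a 1-point (grab sample) slope adjustment.
    def one_point_slope(reference_us_cm, raw_us_cm):
        """Slope that maps the raw reading onto the known reference value."""
        return reference_us_cm / raw_us_cm

    slope = one_point_slope(reference_us_cm=1000.0, raw_us_cm=980.0)
    print(slope * 980.0)   # exactly 1000.0 at the calibration point
    print(slope * 450.0)   # other readings extrapolate along the same slope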
8.12 Cell Constant Calibration Explained
In a cell constant calibration, the user simply enters the known cell constant of the sensor. This
value is labeled on the sensor along with the TC factor value. This is the recommended method of
calibration for the highest accuracy, and it is also the easiest and fastest method of initial
calibration because it involves no reference solutions. The Cell Constant method cannot be used if
the sensor cable length has been altered from the length at which it was originally ordered; if the
cable length has been altered, use the 1-point calibration method instead.
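The relationship behind this method is standard: conductivity (S/cm) is the measured conductance
(S) multiplied by the cell constant K (1/cm). A minimal sketch follows, with the example values
assumed purely for illustration.

    # Conductivity from measured conductance and the labeled cell constant.
    def conductivity_us_per_cm(conductance_us, cell_constant_per_cm):
        """kappa = G * K: conductance in uS times cell constant in 1/cm."""
        return conductance_us * cell_constant_per_cm

    # Example: 500 uS of conductance with a K = 0.1 /cm sensor -> 50 uS/cm.
    print(conductivity_us_per_cm(500.0, 0.1))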
8.13 Zero Calibration Explained
Sensor offset must be set for the system only on initial sensor installation, or whenever the cable
length has been altered. The Zero Cal method establishes all of the sensor offset points for the
instrument's ranges of operation.
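As a rough illustration, the offset captured for each range can be thought of as a value subtracted
from subsequent raw readings before the slope is applied. The sketch below assumes that simple
model; it is not the instrument's actual signal chain.

    # Hypothetical sketch: applying a per-range zero offset and slope.
    def corrected_reading(raw_us_cm, zero_offset_us_cm, slope=1.0):
        """Subtract the offset captured at zero cal, then apply the slope."""
        return (raw_us_cm - zero_offset_us_cm) * slope

    # Example: a 2.4 uS/cm offset recorded during zero calibration.
    print(corrected_reading(152.4, 2.4))   # -> 150.0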