10.1 User Calibration
The system features a semi-automatic calibration feature for zero and span of each of the sensors
(oxygen, carbon dioxide, pressure, temperature and humidity).
These adjustments are possible without internal access to the sensor unit, provided that the
sensors are near to their ideal outputs.
Correct calibration of the gas sensors is dependent on the correct operation of the depth sensor.
Therefore always ensure that the depth sensor is operating satisfactorily before calibrating the gas
sensors.
For the most accurate performance, allow adequate settling time for all gas readings; typically 2 to 5 minutes is required.
10.2 Background to Calibration
The calibration operates by assuming that each sensor has a linear output. Two calibration points are defined. Each point relates an ‘x’ value (the actual output of the sensor) to a ‘y’ value (the value defined by the user). By having two points (x1, y1) and (x2, y2), the system can translate any measured value x to the corresponding value y.
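As an illustration only (this is not the sensor unit's firmware, and the point names simply follow the notation above), the straight-line mapping from a raw value x to a calibrated value y can be sketched as follows:

# Illustrative sketch of a two-point linear calibration.
# The values used in the example are assumptions chosen purely for illustration.

def calibrate(x, x1, y1, x2, y2):
    """Map a raw sensor reading x onto the calibrated value y."""
    slope = (y2 - y1) / (x2 - x1)
    return y1 + (x - x1) * slope

# Example: a pressure sensor whose raw output is 0.95 at 1 Bar Absolute
# and 4.80 at 5 Bar Absolute.
print(calibrate(2.85, 0.95, 1.0, 4.80, 5.0))   # approx. 3.0 Bar Absolute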
y is always measured in the engineering units. These are mBar for O2 and CO2, Bar Absolute for
pressure, °C for temperature and %RH for humidity. For gases, y is entered as a percentage
(corresponding to the certified content of the calibration gas), and the system converts this to
mBar by accounting for the pressure at the time of the calibration.
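As a worked illustration of this conversion (the gas content and pressure figures are examples, not certified values), the entered percentage is simply applied to the absolute pressure present at the time of calibration:

# Illustrative only: converting a certified calibration gas percentage to mBar
# at the pressure present during calibration.

def percent_to_mbar(percent, pressure_bar_abs):
    """Partial pressure in mBar = gas fraction x absolute pressure in mBar."""
    return (percent / 100.0) * pressure_bar_abs * 1000.0

print(percent_to_mbar(5.0, 1.013))   # 5.0% CO2 at 1.013 Bar Abs -> approx. 50.7 mBar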
The value ‘y’ is further processed into ‘customer units’ to account for different units of measurement – for instance MSW or FSW for pressure, or volumetric % for gases. The customer units are defined at the time of ordering from the factory.
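The exact customer-unit definitions are therefore factory-set; the sketch below uses the common diving conventions (10 MSW per Bar, volumetric % as partial pressure over total pressure) purely as assumed, illustrative factors:

# Illustrative conversions into typical customer units. The factors below are
# assumptions for this example only; the factors used by a given system are
# defined at the factory.

def bar_abs_to_msw(pressure_bar_abs):
    """Gauge depth in metres of sea water, assuming 10 MSW per Bar."""
    return (pressure_bar_abs - 1.0) * 10.0

def mbar_to_volumetric_percent(partial_pressure_mbar, total_pressure_mbar):
    """Volumetric % is the gas partial pressure as a fraction of total pressure."""
    return 100.0 * partial_pressure_mbar / total_pressure_mbar

print(bar_abs_to_msw(3.0))                       # 3 Bar Abs -> approx. 20 MSW
print(mbar_to_volumetric_percent(50.7, 2000.0))  # approx. 2.5% by volume at 2 Bar Abs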
The further apart the two calibration points, the more accurate the system will be over the full range of measurement. If the points are too close together, any error in the original calibration is increasingly amplified the further the measured value lies from the calibration points. This leads to the concept of a low point calibration (Cal-L) and a high point calibration (Cal-H).
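The following illustrative calculation (values are assumptions) shows why widely spaced points matter: the same small raw-reading error at the high calibration point produces a much larger error at full scale when the two points are close together.

# Illustrative only: effect of calibration point spacing on accuracy.

def calibrate(x, x1, y1, x2, y2):
    return y1 + (x - x1) * (y2 - y1) / (x2 - x1)

# Assume an ideal sensor whose raw output equals the true value, with a fixed
# 0.01 error in the raw reading taken at the high calibration point.
print(calibrate(10.0, 0.0, 0.0, 10.01, 10.0))  # points at 0 and 10: approx. 9.99 at full scale
print(calibrate(10.0, 0.0, 0.0, 1.01, 1.0))    # points at 0 and 1:  approx. 9.90 at full scale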
When a user performs a Cal-L, the system re-defines the stored values of (x1, y1).
When a user performs a Cal-H, the system re-defines the stored values of (x2, y2).
The system is also programmed with the ideal or intended characteristic, and hence when a user attempts to re-define a calibration point, the system will first check that the measured value of x corresponds within a reasonable accuracy to what is expected. If it does, then the calibration will proceed correctly. If it disagrees, then a calibration fault is raised – leading to either a Cal-L fault or a Cal-H fault. In the event of such a fault, the system protects itself by not accepting the new settings. The fault can be cleared simply by cycling the power to the remote sensor (or to the system). Alternatively, the system can be correctly calibrated to clear the fault – for instance if an incorrect value was entered for the calibration gas.
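A minimal sketch of this kind of plausibility check is shown below. The function name and the 10% tolerance are hypothetical; the actual acceptance limits used by the system are not stated in this manual.

# Hypothetical sketch of the acceptance check described above.

def accept_calibration_point(measured_x, expected_x, tolerance_fraction=0.10):
    """Return True if the measured raw value is close enough to the expected
    (ideal characteristic) value for the new calibration point to be stored."""
    return abs(measured_x - expected_x) <= tolerance_fraction * abs(expected_x)

if not accept_calibration_point(measured_x=0.45, expected_x=0.95):
    print("Cal-L fault: new settings rejected")   # the old calibration is retained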
The CO2 sensor works in a similar manner, except that the Cal-L point is always defined at zero. For this reason, the CO2 calibrations are referred to as Cal-Z and Cal-Span.