© National Instruments Corporation 5-1
5
Calibrating the Device
This chapter discusses the calibration procedures for the AT E Series
device. NI-DAQ includes calibration functions for performing all of the
steps in the calibration process.
Calibration refers to the process of minimizing measurement and output
voltage errors by making small circuit adjustments. On the AT E Series
devices, these adjustments take the form of writing values to onboard
calibration DACs (CalDACs).
Some form of device calibration is required for all but the most forgiving
applications. If no device calibration were performed, the signals and
measurements could have very large offset, gain, and linearity errors.
Three levels of calibration are available to you, and these are described in
this chapter. The first level is the fastest, easiest, and least accurate, whereas
the last level is the slowest, most difficult, and most accurate.
Loading Calibration Constants
The AT E Series device is factory calibrated before shipment at
approximately 25 °C to the levels indicated in Appendix A.
The associated calibration constants—the values that were written to the
CalDACs to achieve calibration in the factory—are stored in the onboard
nonvolatile memory (EEPROM). Because the CalDACs have no memory
capability, they do not retain calibration information when the device is
unpowered. Loading calibration constants refers to the process of loading
the CalDACs with the values stored in the EEPROM. NI-DAQ determines
when this is necessary and does it automatically. If you are not using
NI-DAQ, you must load these values yourself.
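Conceptually, the load operation copies each constant stored in the nonvolatile EEPROM into its corresponding volatile CalDAC register. The following C sketch models that process with in-memory arrays standing in for the EEPROM and the CalDACs; the array names, the CalDAC count, the sample constants, and the `load_calibration_constants` helper are all illustrative, not part of the NI-DAQ API or the device register map:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_CALDACS 8  /* illustrative count, not the actual number on the device */

/* Nonvolatile storage: the constants written at factory calibration
 * survive power cycles here. Values are placeholders. */
static const uint8_t eeprom_cal_area[NUM_CALDACS] = {
    0x3A, 0x80, 0x7F, 0x02, 0x41, 0x80, 0x10, 0x55
};

/* Volatile CalDAC registers: lose their contents when the device
 * is unpowered, so they must be reloaded from the EEPROM. */
static uint8_t caldacs[NUM_CALDACS];

/* Copy every stored constant into its CalDAC -- the step NI-DAQ
 * performs automatically when it detects the CalDACs are unloaded. */
static void load_calibration_constants(void)
{
    for (size_t i = 0; i < NUM_CALDACS; i++)
        caldacs[i] = eeprom_cal_area[i];
}
```

On the real hardware the transfer is a register-level serial write rather than an array copy, but the logic is the same: every CalDAC must be written before the device meets its calibrated specifications.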
In the EEPROM there is a user-modifiable calibration area in addition to
the permanent factory calibration area. This means that you can load the
CalDACs with values either from the original factory calibration or from
a calibration that you performed subsequently.
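The choice between the two EEPROM areas can be sketched as a selector over two constant sets. Everything below is a hypothetical model for illustration; the area names, CalDAC count, and `load_caldacs` helper are assumptions, not NI-DAQ identifiers:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_CALDACS 8  /* illustrative */

/* Permanent factory calibration area: written before shipment. */
static const uint8_t factory_area[NUM_CALDACS] = {
    0x3A, 0x80, 0x7F, 0x02, 0x41, 0x80, 0x10, 0x55
};

/* User-modifiable area: holds constants from a calibration you
 * performed after shipment. Values are placeholders. */
static const uint8_t user_area[NUM_CALDACS] = {
    0x3B, 0x7E, 0x7F, 0x03, 0x40, 0x81, 0x11, 0x54
};

static uint8_t caldacs[NUM_CALDACS];

typedef enum { CAL_FACTORY, CAL_USER } cal_source;

/* Load the CalDACs from whichever EEPROM area the caller selects. */
static void load_caldacs(cal_source src)
{
    const uint8_t *area = (src == CAL_FACTORY) ? factory_area : user_area;
    for (size_t i = 0; i < NUM_CALDACS; i++)
        caldacs[i] = area[i];
}
```

In NI-DAQ itself the area is selected through the calibration functions (for example, `Calibrate_E_Series` takes a parameter identifying the set of constants to use); consult the NI-DAQ function reference for the exact parameter names and values.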
This method of calibration is not very accurate because it does not
account for the fact that the device measurement and output voltage errors can