Standard Count Statistics
Taking such a series of 256 1-second counts will result in a
distribution of counts around a central value. The standard
deviation is a measure of the spread of these counts about the
central value. For a truly random process, such as the decay of
a radioactive source, the ideal standard deviation equals the
square root of the average value.
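As a minimal illustration, the Python sketch below compares the
measured standard deviation of a series of counts with the ideal
Poisson value. The simulated counts and the assumed rate of 2800
counts per second are stand-ins, not gauge specifications; a real
test would use the 256 counts reported by the gauge.

    import numpy as np

    # Simulated stand-in for 256 one-second standard counts.
    rng = np.random.default_rng(0)
    counts = rng.poisson(lam=2800, size=256)

    mean = counts.mean()
    measured_sd = counts.std(ddof=1)   # sample standard deviation
    ideal_sd = mean ** 0.5             # Poisson: variance equals the mean

    print(f"mean={mean:.1f}  measured SD={measured_sd:.2f}  "
          f"ideal SD={ideal_sd:.2f}")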
If the gauge is working properly, then the measured standard
deviation and the ideal standard deviation should be the
same, and their ratio should be 1.00. The Chi-Squared test is
used to determine how far the ratio can deviate from 1.00 and
still be considered acceptable. This is similar to expecting heads
and tails to come up equally when flipping an unbiased coin, yet
accepting some imbalance when the coin is flipped only a small
number of times.
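Continuing the sketch above, the ratio and the equivalent
Chi-Squared statistic could be computed as follows. This is an
illustration of the arithmetic only, not the gauge's internal
algorithm.

    # Ratio of measured to ideal standard deviation.
    ratio = measured_sd / ideal_sd

    # Equivalent Chi-Squared statistic for n counts:
    #   chi2 = (n - 1) * measured_sd**2 / mean
    # so that ratio = sqrt(chi2 / (n - 1)).
    n = len(counts)
    chi2 = (n - 1) * measured_sd**2 / mean

    print(f"ratio={ratio:.3f}  chi2/(n-1)={chi2 / (n - 1):.3f}")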
For a sample of 256 counts, the ratio should be between 0.75
and 1.25 for 95% of the tests. Note that even a good gauge will
fail about 5 out of every 100 tests. If the ratio consistently
falls outside these limits, it may mean that the counting
electronics are adversely affecting the counts. Generally, the
ratio will be high when the electronics are noisy. This might be
due to breakdown in the high-voltage circuits or a defective
detector tube. The ratio will also be high if the detector tube
counting efficiency or the electronics are drifting over the
measurement period (i.e., the average of the first five counts is
significantly different from the average of the last five counts).
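A sketch of the acceptance check and the drift check described
above is shown below. The 0.75 to 1.25 limits are those stated
for 256 counts; the five-count windows and the function names are
illustrative assumptions.

    import numpy as np

    def ratio_test_ok(counts, low=0.75, high=1.25):
        """Pass if the measured/ideal SD ratio is within the limits."""
        counts = np.asarray(counts, dtype=float)
        ratio = counts.std(ddof=1) / counts.mean() ** 0.5
        return low <= ratio <= high

    def drift(counts, window=5):
        """Average of the last `window` counts minus the average of
        the first `window`; a large difference suggests drift."""
        first = sum(counts[:window]) / window
        last = sum(counts[-window:]) / window
        return last - first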
The ratio will be low when the electronics are picking up
periodic noise, such as might occur after failure of the
high-voltage supply filter. This condition should be accompanied
by a significant increase in the standard count over its previous
value.