
8. PROCESSING RESULTS
Figure 10: Example of (a) a left camera image and (b) the corresponding disparity map.
very accurate measurements, but causes a high computational load, and thus
lowers the achievable frame rate. SceneScan supports a configurable disparity
range (see Section 9.3), which allows the user to choose between high-precision
and high-speed measurements.
It is possible to transform the disparity map into a set of 3D points. This
can be done at a correct metric scale if the cameras have been calibrated prop-
erly. The transformation of a disparity map into a set of 3D points requires
knowledge of the disparity-to-depth mapping matrix
Q
, which is computed
during camera calibration and transmitted by SceneScan along with each dis-
parity map. The 3D location (x, y, z)^T of a point with image coordinates
(u, v) and disparity d can be reconstructed as follows:

    (x, y, z)^T = 1/w · (x', y', z')^T,   with   (x', y', z', w)^T = Q · (u, v, d, 1)^T
When using the Q matrix provided by SceneScan, the received coordinates
will be measured in meters with respect to the coordinate system depicted in
Figure 11. Here, the origin coincides with the left camera's center of projection.
An efficient implementation of this transformation is provided with the available
API (see Section 10.4).
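As an illustration of the reprojection described above (not the official SceneScan API, whose implementation is referenced in Section 10.4), the following NumPy sketch applies the Q matrix to every pixel of a disparity map and dehomogenizes the result; the function name and array layout are assumptions for this example:

```python
import numpy as np

def reconstruct_points(disparity_map, q):
    """Reproject a disparity map to 3D points using the 4x4 Q matrix.

    disparity_map: 2D float array of disparities in pixels.
    q: 4x4 disparity-to-depth mapping matrix from camera calibration.
    Returns an (H, W, 3) array of (x, y, z) coordinates.
    """
    h, w = disparity_map.shape
    # Pixel coordinate grids: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Homogeneous input vectors (u, v, d, 1) for every pixel.
    uvd1 = np.stack([u, v, disparity_map,
                     np.ones_like(disparity_map)], axis=-1)
    # (x', y', z', w)^T = Q . (u, v, d, 1)^T, applied per pixel.
    xyzw = uvd1 @ q.T
    # Dehomogenize: (x, y, z) = (x', y', z') / w.
    return xyzw[..., :3] / xyzw[..., 3:4]
```

With the Q matrix received from the device, the resulting coordinates are metric; invalid disparities should be masked out before or after the transformation.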
SceneScan computes disparity maps with sub-pixel disparity resolution.
Disparity maps have a bit depth of 12 bits, with the lower 4 bits of each
value representing the fractional disparity component. It is thus necessary
to divide each value in the disparity map by 16 in order to obtain the
correct disparity magnitude.
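The fixed-point decoding can be sketched as follows; this is a minimal illustration of the divide-by-16 rule stated above, with the function name being an assumption for this example:

```python
import numpy as np

def decode_disparity(raw):
    """Convert raw 12-bit fixed-point disparity values to floats.

    The lower 4 bits hold the fractional component, so dividing by 16
    yields the disparity in pixels with 1/16-pixel resolution.
    """
    return np.asarray(raw, dtype=np.float32) / 16.0

# Example: a raw value of 42 (binary 0000 0010 1010) encodes an integer
# part of 2 and a fractional part of 10/16, i.e. a disparity of 2.625 px.
```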
SceneScan applies several post-processing techniques in order to improve
the quality of the disparity maps. Some of these methods detect erroneous