
IS-1500 User Guide
Thales Visionix, Inc.
MNL-0024 (D)
Page 16 of 59
The VINS algorithm is robocentric, meaning that internally the features are described by a distance and bearing vector relative to
the tracker reference frame. Humans rely on binocular vision for depth perception and estimation of distances; the IS-1500,
however, uses the monocular InertiaCam. To compensate, the distance of a feature from the InertiaCam is calculated using parallax.
Upon first finding features, the system can only roughly estimate their distance from the camera. As the InertiaCam is moved from
side to side, the movement of the features from frame to frame is compared to the precise velocity data from the NavChip. With this
information, the IS-1500 can accurately determine the distance of each feature. In practical usage, this means that the more the
InertiaCam moves about a feature, the more the system will learn about it, improving the precision of the tracking data.
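The parallax principle described above can be sketched as a simple triangulation: camera translation (known precisely from the inertial data) and the resulting pixel shift of a feature together determine its depth. This is an illustrative model only, not the IS-1500's actual algorithm, and the baseline, focal length, and disparity values below are hypothetical.

```python
def depth_from_parallax(baseline_m, focal_px, disparity_px):
    """Triangulate feature depth from a sideways camera translation.

    baseline_m:   camera translation between frames (from inertial data)
    focal_px:     camera focal length in pixels
    disparity_px: pixel shift of the feature between the two frames
    """
    if disparity_px <= 0:
        # No observed parallax: the feature's depth cannot be determined.
        raise ValueError("feature shows no parallax; depth unobservable")
    return baseline_m * focal_px / disparity_px

# A 10 cm sideways move producing a 50-pixel feature shift, with an
# assumed 500 px focal length, implies the feature is about 1 m away.
print(depth_from_parallax(0.10, 500.0, 50.0))  # → 1.0
```

Note that with zero translation there is no parallax at all, which is why the system learns more about a feature the more the InertiaCam moves about it.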
Incidentally, this is also how the system determines that a feature is mobile. If the feature is moving, its distance and bearing will not
correlate with the IMU data, and it will eventually be rejected. It is simple to demonstrate this by placing a hand in the InertiaCam’s
field of view. If the hand is kept still, it is likely to be used as a source of natural features, as seen on the left of Figure 8. However,
when the hand is waved while the InertiaCam is kept still, the features are soon discarded and replaced by alternatives external to the
hand. When the system is uncertain about the feature depth, this will be indicated by a red circle or oval, as seen in features 2172 and
2173. The longer the oval, the greater the degree of uncertainty.
Figure 8 – VINS Hand Waving Test
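The rejection of mobile features can be pictured as a consistency check: the feature's observed image position is compared with the position predicted from the inertial motion, and repeated disagreement leads to rejection. The following sketch is hypothetical; the tolerance and miss count are illustrative, not the IS-1500's actual values.

```python
def update_feature(misses, predicted_px, observed_px,
                   tolerance_px=3.0, max_misses=3):
    """Return (misses, rejected) after comparing the observed feature
    position with the position predicted from IMU motion."""
    dx = observed_px[0] - predicted_px[0]
    dy = observed_px[1] - predicted_px[1]
    if (dx * dx + dy * dy) ** 0.5 > tolerance_px:
        misses += 1   # observation disagrees with the inertial prediction
    else:
        misses = 0    # consistent again: reset the count
    return misses, misses >= max_misses

# A waving hand: the feature repeatedly lands far from where the
# inertial data says a stationary point should be, so it is rejected.
misses, rejected = 0, False
for observed in [(105, 100), (110, 100), (115, 100)]:
    misses, rejected = update_feature(misses, (100, 100), observed)
print(rejected)  # → True
```

A still hand passes this check, which is why it can be adopted as a source of natural features until it moves.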
If both the hand and the InertiaCam are in motion, the features are less likely to be discarded. Instead, the system will continue
trying to track off the mobile features, feeding false pose data to the system and potentially degrading tracking. For accurate
tracking, if there are moving objects in the tracking area, it is generally best to keep them out of the InertiaCam’s field of view. If
this is not possible, keep the InertiaCam still so the system can discard mobile features and replace them with natural features.
The depth estimates calculated for each feature carry a degree of possible error. The amount of possible error scales with the distance
of a feature from the InertiaCam. This means that while features can be found a long distance away on the horizon, tracking off of
them may not yield reliable data. Beyond about two feet, the greater the distance between the detected natural features and the
InertiaCam, the less reliable the tracking data will be. However, features that are further away also typically leave the field of view less
frequently, providing more stable points of reference. A general guideline is to keep the InertiaCam field of view pitched about 20°
below horizontal while tracking. This will allow the majority of features to be found on the ground and in
nearby surroundings with a few still on the horizon. At walking speed, the features on the ground will be in enough frames to provide
accurate depth information before leaving the field of view, while features on the horizon provide stability.
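The growth of depth error with distance follows from the triangulation geometry: a fixed amount of pixel noise corresponds to an ever-larger depth interval as the feature gets farther away, roughly with the square of the distance. The first-order model below is an illustrative sketch with assumed baseline, focal length, and pixel-noise values, not the IS-1500's actual error model.

```python
def depth_std(depth_m, baseline_m, focal_px, pixel_noise_px=0.5):
    """First-order depth uncertainty for parallax triangulation.

    Differentiating z = b*f/d with respect to disparity d gives
    sigma_z ≈ z**2 * sigma_d / (f * b): error grows with distance squared.
    """
    return (depth_m ** 2) * pixel_noise_px / (focal_px * baseline_m)

# With an assumed 10 cm baseline and 500 px focal length, doubling the
# feature distance roughly quadruples the depth uncertainty.
for z in (1.0, 2.0, 4.0):
    print(z, depth_std(z, 0.10, 500.0))
```

This quadratic growth is why features on the horizon can be detected but yield unreliable depth, while nearby ground features triangulate accurately after only a small amount of motion.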