16.5 Programming the UR with the vision sensor
In the simplest case, the UR moves the vision sensor to different fixed waypoints so that image-based
feature checks can be performed there.
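
Purely as an illustration, a minimal URScript sketch of this simplest case might look as follows. The pose values are placeholders, and the commented points where a feature check runs correspond to the job execution node described in section 16.5.1:

# Illustrative sketch only; fixed poses are normally taught by hand.
def inspect_fixed_waypoints():
  # Poses as p[x, y, z, rx, ry, rz] (meters / radians) - placeholders
  waypoint_1 = p[0.40, -0.20, 0.30, 0.0, 3.14, 0.0]
  waypoint_2 = p[0.40, 0.10, 0.30, 0.0, 3.14, 0.0]
  movej(waypoint_1, a=1.2, v=0.25)
  # ... image-based feature check runs here (job execution node) ...
  movej(waypoint_2, a=1.2, v=0.25)
  # ... next feature check ...
end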
Finding objects by image processing is somewhat more complex. Here, the vision sensor extends the familiar waypoint-based programming with a function for image acquisition and with a "second kind" of waypoint that is not taught by hand but instead receives its coordinates from the image processing. In addition, result data from the vision sensor can be evaluated in the robot controller in the form of variables.
NOTE
Please note that image-based waypoints, unlike "classic" waypoints, do not have an
additional parent program node for the movement. The type of movement is set
individually, directly in the image-based waypoint.
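
To make the data flow behind an image-based waypoint concrete, the following URScript sketch shows the underlying idea: the target pose is not taught by hand but received from the image processing. The IP address, port, command string "TRIGGER" and the reply format are assumptions made for this sketch; in a real program the VeriSens nodes handle this exchange, and the movement type (a linear movel here) is the one set directly in the image-based waypoint:

# Sketch only: address, port, "TRIGGER" and the reply layout are
# assumptions; the actual data exchange is handled by the VeriSens nodes.
def move_to_found_object():
  socket_open("192.168.0.100", 2006, "visionsensor")
  socket_send_string("TRIGGER", "visionsensor")
  # Expect six floats: x, y, z, rx, ry, rz of the found object
  reply = socket_read_ascii_float(6, "visionsensor", 2.0)
  if reply[0] == 6:
    target = p[reply[1], reply[2], reply[3], reply[4], reply[5], reply[6]]
    # The movement type is set per image-based waypoint; linear here
    movel(target, a=0.5, v=0.1)
  end
  socket_close("visionsensor")
end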
16.5.1 Node for job execution
The vision sensor job execution node is used to perform the image processing: an image is triggered and captured, and after a short processing time the results are transmitted to the UR.
There are two types of result data:
- Coordinates of objects found by the image processing, in the form of an object list
- Information from the feature checks of different criteria, in the form of named variables
The elements of the object list can then be processed (i.e. approached) with the vision sensor Waypoint node. A job execution node is therefore always required whenever the vision sensor is used; it can be inserted wherever an image-based inspection is required.
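
Again purely as an illustration, the following URScript sketch shows how the two result types could be consumed on the robot side: a named variable carrying a feature check result, and an object list that is iterated element by element. The command string "RUN_JOB", the reply layout and the variable name part_ok are assumptions; in a real program the job execution node performs this exchange and fills the variables automatically:

# Sketch only: "RUN_JOB", the reply layout and part_ok are assumptions.
def run_job_and_evaluate():
  socket_open("192.168.0.100", 2006, "visionsensor")
  socket_send_string("RUN_JOB", "visionsensor")
  # Reply 1: feature check result as a named variable (1.0 = pass)
  check = socket_read_ascii_float(1, "visionsensor", 5.0)
  part_ok = False
  if check[0] == 1:
    part_ok = (check[1] == 1.0)
  end
  # Reply 2: number of found objects, then x, y, rz per object
  header = socket_read_ascii_float(1, "visionsensor", 5.0)
  count = 0
  if header[0] == 1:
    count = header[1]
  end
  i = 0
  while i < count:
    obj = socket_read_ascii_float(3, "visionsensor", 2.0)
    # ... hand obj over to an image-based waypoint movement ...
    i = i + 1
  end
  socket_close("visionsensor")
end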