© Copyright Asyril S.A.
UR program examples
Version: A2
Figure 6-2 – UR program: acquiring a picture of the Asycube surface
• From this acquired picture, different pieces of information are extracted. The vision system must return at least a Boolean indicating whether at least one part has been detected (part_available) and, if one part has been detected, the part position (pick_position).
The Flexible_pick_and_place_with_vision_feedback process must return additional information, such as the number of parts detected on the Asycube surface and the center of mass coordinates of the shape enclosing the detected parts. The user is responsible for implementing this sub-process. The examples specify only the variables that must be returned by this sub-process.
Figure 6-3 – UR program: getting relevant information from the acquired picture (minimum required information)
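As an illustration of the variables this sub-process must populate, the following Python sketch mimics the parsing step. Only part_available and pick_position come from the document; the reply keys (n_parts, best_pose, com) and the function name are hypothetical placeholders for whatever the user's vision system actually returns.

```python
def get_vision_feedback(raw_result):
    """Parse a hypothetical vision-system reply into the required variables.

    Illustrative only: the reply format is an assumption, not Asyril's API.
    """
    part_available = raw_result["n_parts"] > 0          # at least one part detected?
    pick_position = raw_result["best_pose"] if part_available else None
    # Additional information required by the vision-feedback example:
    n_parts = raw_result["n_parts"]                     # parts on the Asycube surface
    center_of_mass = raw_result["com"]                  # of the shape enclosing the parts
    return part_available, pick_position, n_parts, center_of_mass

# Example with a mocked vision reply:
available, pose, count, com = get_vision_feedback(
    {"n_parts": 3, "best_pose": (0.10, 0.05, 0.0), "com": (0.12, 0.04)}
)
```

Any equivalent structure works, as long as the four variables above are set before the program branches on part_available.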
• If at least one part is detected (part_available = True), the robot picks and places it. The picking position (pick_position) is directly delivered by the vision system, whereas the place position (place_position) must be set by the user. The user is responsible for implementing the gripper actions to pick and release the part, the different intermediate robot poses, and the place position.
Figure 6-4 – UR program: pick and place the detected part
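The pick-and-place branch can be sketched as follows (Python, not URScript). All helpers here — move_to, gripper_close, gripper_open and the robot object — are hypothetical stand-ins for the user-implemented motions and gripper actions the text describes.

```python
def pick_and_place(part_available, pick_position, place_position, robot):
    """Pick the detected part and place it, if one is available.

    Sketch only: intermediate approach/retract poses are omitted and are
    the user's responsibility, as is the choice of place_position.
    """
    if not part_available:
        return False
    robot.move_to(pick_position)      # pick position delivered by the vision system
    robot.gripper_close()             # user-implemented gripper action
    robot.move_to(place_position)     # place position set by the user
    robot.gripper_open()
    return True

class _LogRobot:
    """Minimal stand-in that records the actions it is asked to perform."""
    def __init__(self):
        self.log = []
    def move_to(self, pose):
        self.log.append(("move", pose))
    def gripper_close(self):
        self.log.append(("close",))
    def gripper_open(self):
        self.log.append(("open",))

robot = _LogRobot()
done = pick_and_place(True, (0.10, 0.00, 0.05), (0.30, 0.20, 0.05), robot)
```

In the real UR program this branch is a node tree in PolyScope rather than a function, but the ordering of actions is the same.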
• If no parts have been detected (part_available = False), the hopper provides new parts on the Asycube surface, and these parts are moved by dedicated Asycube vibrations triggered by the URCap, thus easing the next picking attempt. The behaviour differs depending on whether the process is driven by vision feedback and on whether the Asycube surface is flat.
The user is responsible for adjusting the different sub-nodes implemented by the Asycube Vibrate parent node (choice of the vibration set, output triggering the hopper, durations, etc.).
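The "no part detected" branch above can be sketched in the same illustrative style. The action names and the exact way the vibration choice depends on vision feedback and surface flatness are assumptions; only the overall pattern — trigger the hopper output, then vibrate — comes from the text.

```python
def no_part_branch(vision_feedback, flat_surface):
    """Return the ordered actions for one refill/vibration attempt.

    Sketch under assumed names: "hopper_on"/"hopper_off" stand for the
    digital output triggering the hopper, and the vibration labels stand
    for vibration sets configured in the Asycube Vibrate node's sub-nodes.
    """
    actions = ["hopper_on"]  # output driving the hopper for a user-set duration
    if vision_feedback and flat_surface:
        # e.g. a vibration chosen from the vision result (centre of mass)
        actions.append("vibrate_targeted")
    else:
        actions.append("vibrate_default")  # fixed vibration set chosen by the user
    actions.append("hopper_off")
    return actions

with_feedback = no_part_branch(vision_feedback=True, flat_surface=True)
without_feedback = no_part_branch(vision_feedback=False, flat_surface=True)
```

After this branch completes, the program loops back to acquiring a new picture (Figure 6-2) for the next picking attempt.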