How to maximize machine vision guided robotics

The Robot Report

Machine vision systems are increasingly being used to guide robot actions — a trend that has grown out of recent advances in affordable machine vision technologies and industrial computing power. When coupled with 2D or 3D sensors, robots can be made to perform a wide variety of tasks, from basic inspection to more complex pick-and-place operations.

But to truly reap the benefits of vision-guided robotics, you need to select the right system for your application. Today, you have several types of machine vision systems to choose from, each with its own system requirements and sensor technologies. How do you know which one is right for your application? To determine this you need to consider the needs and goals of your operation, including the size and orientation of workpieces, as well as processing times.

In this article, we will consider the criteria that can help you select the best machine vision system for your robotics application. We will also discuss some of the hidden costs associated with 3D machine vision systems, which can help drive your decision.

An overview of 2D machine vision

Before we get to 3D machine vision, let’s review 2D machine vision. 2D machine vision is typically used for inspection tasks, such as checking the dimensions, features, and orientation of parts as they move along production lines. These systems work by generating flat, two-dimensional maps of reflected intensity, or contrast, making lighting an important factor in these applications. Because too much or too little light can throw off the accuracy of images, it’s important to consider ambient conditions, artificial light and shadows in order to capture part edges and features clearly.
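To make the idea of a flat intensity map concrete, the sketch below treats a 2D sensor’s output as a simple array of brightness values and thresholds it to locate a part. This is a minimal illustration with a synthetic image, not any vendor’s pipeline; the threshold value is exactly the quantity that poor lighting can throw off.

```python
import numpy as np

def locate_part(gray, threshold=128):
    """Find the bounding box of a bright part on a dark background.

    gray: 2D array of reflected-intensity values (0-255), the flat
    "contrast map" a 2D vision sensor produces.
    Returns (x_min, y_min, x_max, y_max) in pixel coordinates,
    or None if no pixel clears the threshold.
    """
    mask = gray > threshold          # too much or too little light shifts this mask
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Synthetic 2D "image": a bright 4x6 rectangle on a dark background.
img = np.zeros((20, 30), dtype=np.uint8)
img[5:9, 10:16] = 200

print(locate_part(img))  # (10, 5, 15, 8)
```

Note that the result only carries X and Y information, which is exactly the limitation discussed next.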

Although having only X and Y data is sufficient for many applications, like simple object tracking, 2D vision systems have their limits. For one, they render real-life, three-dimensional objects as flat, 2D projections with no depth of field. This lack of a third dimension presents a challenge for tasks that rely on object shape and orientation, such as bin picking.

Like 3D machine vision systems, 2D systems are sensitive to lighting conditions. Natural light sources, such as windows or skylights, can affect sensor readings. Adding an enclosure or shroud to block out this variability can increase the success of these applications.

Even with some limitations, 2D vision systems are cost-effective and easy to implement in many applications. Examples include quality inspection, part detection, optical character recognition, barcode reading and many more.

An introduction to 3D machine vision

Although vision-guided robotics can involve 2D sensors, these applications typically use 3D vision systems, which operate in conjunction with higher-performing six-axis or SCARA robots. There are several 3D sensor technologies to choose from, including laser displacement and structured light, both of which generate a point cloud: a list of three-dimensional coordinates representing an object’s surface in space. The 3D camera generates the point cloud, and then image processing software analyzes the point cloud file to guide the robot.
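As a rough illustration of what a point cloud is, the sketch below builds a synthetic N x 3 array of coordinates and derives a naive pick target from it. The `pick_point` helper is invented for illustration; real vision software performs far more work (segmentation, pose estimation, collision checking).

```python
import numpy as np

# Hypothetical point cloud: N x 3 array of (x, y, z) coordinates in
# millimeters, as a 3D camera might produce for a box-shaped part.
rng = np.random.default_rng(0)
cloud = rng.uniform(low=[100, 50, 0], high=[140, 90, 25], size=(500, 3))

def pick_point(points):
    """Return a naive pick target: the (x, y) centroid of the cloud
    at the part's highest surface (max z)."""
    centroid_xy = points[:, :2].mean(axis=0)
    top_z = points[:, 2].max()
    return np.array([centroid_xy[0], centroid_xy[1], top_z])

target = pick_point(cloud)
print(target)  # roughly [120, 70, 25] for this synthetic part
```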


Source: Mitsubishi Electric

Unlike 2D sensors, which generate flat images of objects, 3D sensor technologies can guide robots in complex pick-and-place and inspection applications. They can also handle unstructured part orientations.

In terms of setup, you can integrate 3D vision systems with robotic cells in different ways.

For example, you can attach small, lightweight 3D sensors to the robot hand in what’s known as an end-of-arm configuration, or mount the camera above the robot with the lens pointing downward at the workspace in a fixed configuration.

The speed of these configurations depends on several factors, including processing times and how long it takes to move to the pick location. In terms of their benefits:

  • The end-of-arm configuration is more flexible, allowing the robot to move the camera to inspect parts with unique orientations, as well as areas that are difficult to access. Keep in mind, this configuration can make your process slower because you have to wait for the robot to move before you can capture an image. You therefore need to factor in the robot’s repeatability.
  • Fixed configurations accommodate a larger field of view, as they are not limited by the reach of the robot. In addition, the camera can take pictures while in motion, reducing cycle times. You also don’t have to worry about the robot’s variance because the camera position is always known. These benefits make this configuration the preferred method when possible.
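The cycle-time trade-off between the two configurations can be sketched as back-of-envelope arithmetic. The timing numbers below are made up for illustration; the point is that an end-of-arm camera serializes robot motion and imaging, while a fixed camera lets them overlap.

```python
def cycle_time_end_of_arm(move_to_image, settle, capture, process, move_to_pick):
    """End-of-arm camera: the robot must move and settle before the
    image can be taken, so every step is sequential."""
    return move_to_image + settle + capture + process + move_to_pick

def cycle_time_fixed(capture, process, move_to_pick):
    """Fixed overhead camera: imaging can overlap robot motion, so the
    cycle is limited by whichever path takes longer."""
    return max(capture + process, move_to_pick)

# Illustrative (made-up) timings in seconds:
eoa = cycle_time_end_of_arm(move_to_image=0.8, settle=0.3, capture=0.1,
                            process=0.5, move_to_pick=0.9)
fixed = cycle_time_fixed(capture=0.1, process=0.5, move_to_pick=0.9)
print(round(eoa, 2), fixed)  # 2.6 0.9
```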

Applications and Benefits

3D vision systems have many advantages—some of which overcome the shortcomings of 2D machine vision, which typically only provides object information in the X and Y dimensions. While it’s true that some 2D systems can infer simple data in three dimensions, they’re mostly limited to the X-Y plane.


Source: Mitsubishi Electric

3D systems generate much richer data in all three directions, making them ideal for complex robotic tasks that need to cope with diverse object shapes and orientations. When properly deployed, 3D vision systems are also highly repeatable and can avoid errors due to object location, orientation and presentation to the sensor.

Because 3D vision systems excel at handling the intricacies of three-dimensional workpieces, they’re ideal for applications that are less organized in nature and involve a random presentation of parts.

One example is bin picking, in which the camera detects and analyzes the randomly piled parts in a bin. Using this information, it then guides the robot to pick up individual workpieces for the next step in the production process. 3D vision systems have the capability of picking parts that have variable surface conditions, such as welded parts or parts that need to be deburred.

Another benefit of 3D vision systems is their ability to match parts using registered 3D CAD models. Some 3D systems also offer technologies that match parts on the fly, without comparing them to a CAD model. Because this skips the processing time of the CAD comparison, “model-less” matching strikes a good balance between the ability to pick randomly oriented parts and pick speed (see below).

Model-less versus model-matching modes

The MELFA 3D machine vision system allows users to choose between model recognition and model-less modes for robotic workpiece gripping:

  • Model-less: A recognition method that registers the shape of the hand or suction pads and then matches the hand shape to suitable grip locations on the part. Registering the workpiece shape as 3D CAD models is not required.
  • Model matching: A recognition method that registers workpiece shapes as 3D CAD models. It then searches for workpieces that match these models in order to identify workpiece posture and grip location.
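The trade-off between the two modes can be sketched with toy logic. The data structures and scoring below are invented for illustration and do not reflect the MELFA API; they only show why model matching costs a model-library search while model-less matching needs nothing but the registered gripper footprint.

```python
def fit_score(model, surface):
    """Toy score: the closer the surface area is to the model's, the
    better the fit. Real systems compare full 3D geometry."""
    return -abs(model["area"] - surface["area"])

def model_matching(scene_surfaces, cad_models):
    """Search every registered CAD model against every candidate
    surface; return (score, model_name, surface) for the best fit."""
    best = None
    for name, model in cad_models.items():
        for surface in scene_surfaces:
            score = fit_score(model, surface)
            if best is None or score > best[0]:
                best = (score, name, surface)
    return best

def model_less(scene_surfaces, gripper_footprint):
    """Return the first surface large and flat enough for the
    registered gripper footprint; no CAD model is required."""
    for surface in scene_surfaces:
        if surface["area"] >= gripper_footprint and surface["flatness"] > 0.9:
            return surface
    return None

# Invented scene data: two candidate surfaces and two registered models.
surfaces = [{"area": 40.0, "flatness": 0.95},
            {"area": 120.0, "flatness": 0.80}]
models = {"bracket": {"area": 118.0}, "pin": {"area": 12.0}}

print(model_matching(surfaces, models)[1])                # bracket
print(model_less(surfaces, gripper_footprint=30.0)["area"])  # 40.0
```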

Hidden cost considerations

Despite the many benefits, there are some cost considerations associated with 3D machine vision systems, and not all of these costs relate to the vision hardware itself. For one, 3D vision often involves extra programming and integration work, and it may require higher-quality CAD data. You also need to account for the cost and complexity of auxiliary components such as end-of-arm tooling. These tools, which can drive up cost in any robotic system, include suction-pad and parallel grippers, sensors and welding torches, as well as tables, fixturing and 2D sensors.

These added costs could apply to any machine vision system, but 3D systems can exacerbate them due to the processing time added by CAD model matching.

That being said, if you want to minimize the hidden costs of 3D vision and maximize its benefits, it’s important to pick the right type of vision system for the job at hand. To illustrate how different systems target different applications, consider two 3D systems:

  • The MELFA 3D system is a clear choice for applications that don’t require model matching to grab and orient parts. It is also ideal for smaller part sizes and features good part orientation capabilities.
  • The Canon system excels whenever full model matching is needed to meet the part handling objectives. It can also handle larger parts and bins than the MELFA system due to its partial CAD recognition feature, which allows the system to recognize a part that is not entirely within the camera’s field of view.

The right machine vision system for you

Picking the right machine vision system for your application is a complex topic that typically requires some engineering hours. However, the fundamentals boil down to a combination of part size, diversity and orientation, requirements for robot processing times and system costs. Oftentimes, these selection criteria will point you in a clear direction when it comes time to select a 3D vision system.

To learn more about how to select the best machine vision system for your application, visit:
https://us.mitsubishielectric.com/fa/en/products/industrial-robots-melfa/intelligent-options

About the author

Adam Welch is a product manager for robotics at Mitsubishi Electric Automation Inc.
