3 robotics takeaways from CRAV.ai

The Robot Report

Google’s Vincent Vanhoucke delivers a keynote at CRAV.ai about closing the perception-actuation loop using machine learning. | Credit: Jeff Burnstein/A3

After attending the hustle and bustle of huge robotics shows like Automate, the Collaborative Robots, Advanced Vision & AI (CRAV.ai) Conference brought a welcome change of pace. Held over two days in San Jose, Calif., this event featured focused technical sessions where experts shared insights from across the automation industry.

Here are my general takeaways from the conference, which was produced by the Association for Advancing Automation (A3).

Redefining human-robot collaboration

Until very recently, collaborative robots simply meant pressure pads and slowed-down arm speeds for the sake of safety. Not only are companies removing those types of constraints with smart software and vision systems, but innovators are also redefining what it means for humans and robots to work together.

Some examples of next-generation human-robot collaboration include exoskeletons used on construction sites, AI that translates edges and vertices (the language of machine vision) into shapes and objects (the language of human vision), and human-machine interfaces that both train machine learning models and achieve 100% success rates (98% doesn’t cut it anymore).

Several presenters spoke about the visual aesthetic of robots as a critical aspect of a system’s adoption in the field. One cited study showed that adding minor humanoid features, such as a face and arms, to a robot drastically improves people’s trust in and use of it. Developers are already getting better at understanding how humans (critical thinking) and robots (accuracy, repeatability, and speed) can combine their strengths to produce efficiencies greater than the sum of their parts.

Feeding the AI beast

As expected, most picking and object recognition demos at CRAV.ai involved some type of AI, such as deep learning or neural networks. While presenters offered a range of technical philosophies (structured vs. unstructured learning), one theme was common: AI is hungry for data. This need isn’t new, but the methodologies system designers use to collect this data with robots are still nascent.

Throughout the conference, I observed some common needs for the not-so-distant future:

  • Adding more sensors and vision systems
  • Growing software teams
  • Designing system architectures to support large data pipelines
  • Developing cloud infrastructure to house and process data

A confluence of innovation and scale

Robotics and machine vision have been around for decades and span almost every industry at a huge scale. The influx of vision, robotics, and software startups has already disrupted this status quo, and companies like Google and Amazon are rolling out more cloud infrastructure to set the stage.

What can we surmise from these takeaways from CRAV.ai? First, solving the automation problem is valuable – companies already understand the bottom-line business outcomes they can achieve with Industry 4.0. Second, solving the automation problem is technically complex. Much like the cobot systems we help design every day, the biggest winners will be those who embrace continuous collaboration among hardware makers, software developers, integrators, and end users.

About the Author

Jesse Masters is a Field Application Engineer at Zivid, where he leads all of the company’s field activities in North America. Zivid is a Norway-based provider of 3D machine vision cameras and software for next-generation robotics and industrial automation systems.

With more than two decades of in-house R&D and deep expertise in optical sensors and 3D machine vision hardware and software, Zivid enables a range of applications, including de-palletizing, bin-picking, pick-and-place, assembly, packaging, and quality control.
