As 2019 winds down, every news and technology site is pausing to reflect on the biggest trends of the past year. Some aspects of robotics and artificial intelligence are advancing rapidly; others, not so much. What should AI and robotics developers keep in mind as they prepare for a new year?
The Robot Report spoke with the following leaders at AI and robotics companies about their observations from the past year:
- Sudhir Jha, senior vice president and head of Brighterion Inc., a Mastercard company that uses AI and machine learning to provide mission-critical business intelligence in real time, regardless of data type, complexity, or volume
- Max Versace, co-founder and CEO of Neurala Inc., whose Brain Builder software platform applies AI to visual inspections
- Thomas Visti, CEO of Mobile Industrial Robots ApS (MiR), which makes collaborative autonomous mobile robots (AMRs)
In your opinion, what was the biggest news in AI and robotics in 2019?
Visti: The biggest news in robotics this past year is a combination of two big stories — the introduction of autonomous mobile robots for higher-payload materials handling, as well as the ability for these AMRs, and even those focused on lighter payloads, to benefit from advancements in artificial intelligence.
Innovative AI capabilities drive improved efficiency in both path planning and environmental interaction. Instead of reacting the same way to all obstacles, mobile robots can learn as they go. For example, they can distinguish between human workers and a forklift and adapt their driving patterns accordingly.
AI also enables AMRs to avoid high-traffic areas during specific times, such as when goods are regularly delivered and transferred by fork truck or when crowds of workers are present, such as during breaks or shift changes. We’ll definitely see the benefits of combining AI and robotics in the months and years ahead.
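The behaviors Visti describes can be pictured as a simple policy layer on top of the robot's perception stack. The sketch below is a hypothetical toy, not MiR's actual software; the obstacle classes, speed limits, and shift-change hours are all assumptions chosen for illustration.

```python
# Toy sketch (hypothetical, not MiR's software): an AMR policy that reacts
# differently per perceived obstacle class and replans around known
# high-traffic zones during busy hours, as described above.

SHIFT_CHANGES = {(6, 7), (14, 15), (22, 23)}  # assumed busy hours (start, end)

def reaction(obstacle: str) -> dict:
    """Pick a speed limit and clearance based on the perceived obstacle class."""
    if obstacle == "human":
        # Slow down and give people extra room.
        return {"max_speed_mps": 0.3, "clearance_m": 1.0}
    if obstacle == "forklift":
        # Stop and yield: a forklift driver may not see the robot.
        return {"max_speed_mps": 0.0, "clearance_m": 2.0}
    # Static obstacle (pallet, wall): just route around it at normal speed.
    return {"max_speed_mps": 1.0, "clearance_m": 0.5}

def avoid_zone(zone_is_high_traffic: bool, hour: int) -> bool:
    """Replan around a high-traffic zone during shift changes."""
    busy = any(start <= hour < end for start, end in SHIFT_CHANGES)
    return zone_is_high_traffic and busy
```

In a real AMR, the obstacle class would come from a learned perception model and the reaction would feed the motion planner; the point here is only that classification lets the planner choose different behaviors instead of treating every obstacle alike.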
Versace: One of the biggest trends in 2019 has been organizations taking AI beyond the hype and actually applying it to real-world use cases. The past few years have seen a lot of buzz around the technology, but we need to cut through the noise and develop AI that can actually be implemented across industries.
Take the retail/grocery industry as an example. These companies were among the first to adopt AI and deploy robots at scale. Companies like Badger Technologies are leaders in working with AI companies to deliver robots that retailers can deploy in supermarkets across the country.
These robots help with tasks such as inventory management, increasing store safety, and elevating the customer experience. We’ve seen more and more examples of companies approaching AI with these real-world applications in mind.
What technologies and techniques have improved the most lately, and which capabilities do you still want to see?
Versace: We’ve seen progress when it comes to implementing new approaches to deep neural networks (DNNs). People may not realize that there are actually several approaches to DNNs — think of them as different “shades” of neural networks that fall along a spectrum. The one you choose can make the difference between a failed experiment and a successful deployment.
Emerging approaches such as Lifelong-DNN (L-DNN) are moving the industry toward new, more brain-inspired approaches that are able to add new information to AI algorithms on the fly. This means that rather than starting from scratch each time you want to improve the network, you can continue to train it incrementally as you see its strengths and shortcomings when deployed in the field.
While many people think that's how all AI works, it has actually been a real challenge in the industry to develop methodologies that support ongoing learning without complete retraining. Even more importantly for robotics, L-DNN moves learning to the compute edge, eliminating the need to ping a server for training.
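The contrast between batch retraining and incremental updates can be made concrete with a minimal sketch. This is not Neurala's L-DNN; it is a plain online perceptron, shown only to illustrate the idea that a model can fold in new field examples one at a time without ever revisiting its original training data.

```python
# Minimal sketch (not L-DNN): an online perceptron whose update() folds in
# new examples one at a time, so improving the model never requires
# retraining from scratch on the full historical dataset.

class OnlinePerceptron:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s > 0 else 0

    def update(self, x, y):
        # Single-example update: no access to past data is needed.
        err = y - self.predict(x)
        if err:
            self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * err

# Initial training before "deployment" (AND-like labels on a tiny batch).
model = OnlinePerceptron(n_features=2)
batch = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
for _ in range(20):
    for x, y in batch:
        model.update(x, y)

# Later, in the field, new examples are folded in incrementally;
# the original batch is never revisited.
model.update([1, 1], 1)
```

Frameworks without this property must retrain on the combined old-plus-new dataset to incorporate field data, which is the server round-trip Versace says edge learning eliminates.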
In terms of what capabilities I'd like to see, I'm interested in platforms doing the work to connect the technology to the use case. We're seeing a lot of general-purpose solutions, but specificity is a strength in AI, and I expect technology that is easy to customize to take the lead this year.
Jha: There were continued improvements in large-scale machine learning (deep learning in particular), better language understanding and generation for more natural conversational interfaces, and more accurate machine vision for improved robotic perception. All of this has allowed robots to come out of cages and interact more readily with humans to solve complex problems.
In parallel, we also had a much stronger realization that ethics/bias/trust in AI needs to be center stage and not be an afterthought for developers and researchers. As AI makes more and more decisions in our everyday lives, we need to ensure that these decisions are fair and can be trusted.
What’s the most persistent misperception of AI and robotics that you encounter?
Versace: One of the biggest misconceptions about AI and robotics is that once AI is deployed, you never have to touch it again. In fact, because systems and processes are always changing, the opposite is true.
Instead, organizations need to shift their mentality to look beyond today in order to be successful. AI is continuously challenged by emerging use cases and changing conditions, so robotics companies need to design AI that can adapt to new real-world scenarios as they are encountered in real time, rather than building a solution and assuming it will work forever.
Jha: People continue to feel more threatened by AI and robotics than by any other technology. While some of this is understandable due to the wide applications and somewhat opaque nature of the technology, a lot of it is driven by what people may have seen on screen or read in stories, which is very different from the current reality.
We are nowhere near designing a machine that can think on its own, understand or display emotions, and seek the destruction of human beings. Most AI-based systems are trained to perform a handful of tasks efficiently and learn from the data that is fed to them. They are very good at processing vast amounts of data and drawing conclusions to pick the best options, and in this limited domain, they are often better than humans.
But this doesn’t make them more powerful than the humans who decide in which domains to use the machine or what datasets they would have access to. As with any other technology, we still have to be careful to ensure that robots and AI are not exploited by malicious people.
What was your biggest success in 2019?
Visti: MiR’s biggest success was our booming international growth, as we see more of the world’s largest companies invest in our robots for multiple sites. To accommodate this growth, we’ve opened larger offices throughout the world, including a new collaborative automation center in Barcelona with Universal Robots and a much larger U.S. headquarters in New York.
Teradyne’s industrial automation strategy makes it a strong partner for us. [Editor’s note: North Reading, Mass.-based Teradyne owns MiR, Universal Robots, Energid, and AutoGuide Mobile Robots.] It has the experience, capital, and power to help bring more advanced automation to companies of all sizes to make the workplace more productive, safer, and able to produce higher-quality products. While we continue to run very independently, we definitely take advantage of their expertise, and it continues to pay off for both companies.
Note: These experts will look ahead to AI and robotics in 2020 in another article to be posted soon.
The post AI and robotics execs look back at tech trends of 2019 appeared first on The Robot Report.