Drone deliveries, service robots in hospitals, and an army of robots helping warehouse workers have received plenty of attention during the novel coronavirus crisis. However, developers and users should remember to protect cybersecurity in the rush to respond to urgent needs, said Karen Panetta, dean of graduate engineering at Tufts University.
As the software stack for autonomous systems evolves and diversifies, designers will need to find common ground for sharing data, noted Panetta, who is also an Institute of Electrical and Electronics Engineers (IEEE) Fellow and a member of the IEEE Robotics and Automation Society. Safety assurances for robotics and artificial intelligence are essential to their continued adoption beyond the COVID-19 pandemic, she said.
AI, transparency key to cybersecurity
“So many people think of AI as this black box, like Zoltar in Big,” said Panetta. “To build these systems, we need robust and validated data to train them. There is a whole field of study around explainable AI.”
“Right now, with AI and cybersecurity, we’re looking at deviations in behavior,” she told The Robot Report. “Hackers have brought down drones by bombarding them with more commands than they could handle.”
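The flooding attack Panetta describes is essentially a denial of service against the flight controller. One minimal defense, sketched here with hypothetical parameters, is a sliding-window rate limiter that sheds commands arriving faster than the controller can safely process:

```python
import time
from collections import deque

class CommandRateLimiter:
    """Accept at most `max_commands` within any `window_s`-second window.

    A hypothetical sketch of one defense against the command-flooding
    attacks described above: excess commands are rejected outright
    rather than queued until the controller falls behind.
    """
    def __init__(self, max_commands: int, window_s: float):
        self.max_commands = max_commands
        self.window_s = window_s
        self.arrivals: deque = deque()  # timestamps of accepted commands

    def allow(self, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        # Discard arrivals that have aged out of the sliding window.
        while self.arrivals and now - self.arrivals[0] > self.window_s:
            self.arrivals.popleft()
        if len(self.arrivals) >= self.max_commands:
            return False  # flood detected: shed this command
        self.arrivals.append(now)
        return True
```

Shedding (rather than buffering) is the point: a bounded queue keeps an overloaded controller responsive instead of letting an attacker exhaust it.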
“Another way to gather data is to run rigorous test cases and remember to consider malicious intent,” Panetta said. “I worked with one student who was a designer, and we only looked at how things should work, but what happens if someone reroutes something? People don’t realize how insecure cars are.”
“It’s a big paradigm shift, with actuators and end manipulators as a key focus for safe and secure interactions,” she said. “Designers need to incorporate more AI to make their systems both more efficient and more robust, and to inspire confidence.”
Training data becomes a commodity
The amounts of data needed to train machine learning for robots and autonomous vehicles also pose problems such as potential bias and blind spots of human annotators, said Panetta.
“We’ve seen with epidemiology and demographic data the risks of bias,” Panetta said. “As robots start to take people’s temperature in hospitals and enter schools, people must become not just digital natives, but also AI natives.”
“It is a question of scale, and some researchers and startups are starting to automate the annotation process,” she said. “They’re using eye tracking to capture expertise and automatically annotating data for analysis.”
“The next frontier will be robots that can independently learn and share, which does not happen much right now,” Panetta added. “We can train robots and AI for certain cases, but if they have not seen them before, they don’t know how to react. Driving in Boston traffic and driving in the desert — being able to compare those experiences for training is in its infancy.”
“Right now, especially with autonomous vehicles, there’s a question of data collection and who owns it,” she said. “How could developers share it, as the data itself becomes the product? Entrepreneurs will find new ways to look for erratic data and abnormal behavior. It’s the same as with imaging technology and AI for detecting cancer.”
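The “erratic data and abnormal behavior” screening Panetta anticipates can be illustrated with the simplest possible outlier test. This is a sketch only, using a z-score threshold on a stream of telemetry readings; production systems would use far richer models, but the principle is the same: learn what normal looks like, then flag what deviates from it.

```python
import statistics

def flag_anomalies(readings, threshold=3.0):
    """Return indices of readings whose z-score exceeds the threshold.

    Hypothetical illustration of erratic-data screening: readings more
    than `threshold` standard deviations from the mean are flagged.
    """
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []  # perfectly uniform data: nothing to flag
    return [i for i, x in enumerate(readings)
            if abs(x - mean) / stdev > threshold]
```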
Simulation and safety
Developers are benefiting from simulation in AI approaches to tasks such as robotic grasping, and “multi-mode” models should lead to more complex and secure systems, said Panetta.
“One thing that has not changed in 25 years is that things will fail at the boundaries,” she noted. “That’s where we need to simulate better. If you build a model, you don’t need to know everything, but mixed-mode simulations can encapsulate functions and have a low-level structural model that incorporates everything from behavior to plug-and-play architectures.”
“Even though we have huge compute power, it’s important to low-level test everything,” Panetta said. “More multi-mode simulation products are coming, and they include everything from CAD to digital logic. There are startups whose software can do huge simulations of electrical and mechanical systems, as well as the workspace.”
“Being able to inject faults, such as how a robot looks at a person who trips, will be instrumental to AI and safety,” she said.
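Fault injection of the kind Panetta describes can be sketched as a wrapper around a simulated sensor. Everything here is hypothetical (the function names, rates, and noise model are illustrative), but it shows the technique: the simulation perturbs or drops readings so the rest of the stack is exercised against failures it would otherwise never see in testing.

```python
import random

def with_fault_injection(sensor_read, drop_rate=0.05, noise=0.5, rng=None):
    """Wrap a zero-argument sensor function for fault-injection testing.

    Hypothetical sketch: with probability `drop_rate` the wrapped
    function returns None (a dropped frame); otherwise it returns the
    true reading perturbed by uniform noise in [-noise, noise].
    """
    rng = rng or random.Random()

    def faulty_read():
        if rng.random() < drop_rate:
            return None  # simulate a dropped or corrupted reading
        return sensor_read() + rng.uniform(-noise, noise)

    return faulty_read
```

Because the wrapper leaves the underlying model untouched, the same perception and control code runs in both the clean and the faulted simulation, which is what makes the comparison meaningful.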
More cybersecurity standards needed
“The IEEE supports more standards for AI, robots, and drones,” said Panetta. “With more applications, we need more coding standards for architecture and cybersecurity. There are some that exist for military systems, but they’re not enforceable for commercial ones.”
“For example, with coding standards for drones, you could use AI to capture scenarios, authenticate code, and conduct self-checking on board,” she said. “Hardware has to come along, too. In the instance of a hacker overloading a CPU, with built-in testing, the drone could go into safe mode if it got confused or encountered a condition it hadn’t seen before.”
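The safe-mode behavior described above amounts to a built-in test that watches the control loop and latches into a degraded mode when the loop repeatedly misses its deadline, as it would under a CPU-overload attack. A minimal sketch, with hypothetical class and parameter names:

```python
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()
    SAFE = auto()  # e.g., hold position and ignore external commands

class FlightSupervisor:
    """Hypothetical built-in test: latch into safe mode after the
    control loop overruns its deadline several times in a row."""
    def __init__(self, deadline_s: float, max_overruns: int = 3):
        self.deadline_s = deadline_s
        self.max_overruns = max_overruns
        self.overruns = 0
        self.mode = Mode.NORMAL

    def record_loop(self, elapsed_s: float) -> Mode:
        if elapsed_s > self.deadline_s:
            self.overruns += 1
        else:
            self.overruns = 0  # a healthy loop resets the counter
        if self.overruns >= self.max_overruns:
            self.mode = Mode.SAFE  # latched: recovery requires review
        return self.mode
```

Latching (rather than automatically returning to normal) is a deliberate choice here: once the supervisor suspects an attack, it stays conservative until a trusted recovery procedure clears it.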
“Many startups just want to deliver the application now and aren’t thinking about edge cases or misappropriated uses,” Panetta said. “The development of ‘white hat’ hacking will be huge for robot cybersecurity. Schools are now starting to look at ethics and cybersecurity for AI.”
Costs and capabilities
“For a long time, the cost for the biggest applications of robotics and AI was prohibitive,” said Panetta. “Facilities were used to hiring people to go in and disinfect or to take people’s temperatures.”
“In an IEEE AI survey, we discovered that parts of Asia and Europe were already using these technologies,” she said. “The U.S. was late to wholeheartedly adopt automation for these use cases. We’ve learned that online learning, adequately protecting low-paid workers, and guaranteeing access to food and medical supplies are critical.”
“We’ve been able to send robots to the moon and Mars to take samples, but the costs had to come down,” Panetta said. “Right now, robots can look for spills in grocery stores, but why can’t they clean them up at the same time? A big part of the exercise is getting the public used to robots, as developers manage expectations.”
R&D at the boundaries of human-machine interaction
“Tufts University has one of the few graduate programs for human-robot interaction in the country,” said Panetta. “Human behavior is unpredictable, so you need to abstract it for safe interactions.”
“One project is working with people with spinal cord injuries, and it turns out that many would rather have something more application-specific and cost-effective than an expensive manipulator that tries to do everything,” she said. “How to get something off the kitchen counter or the floor is different than opening a microwave.”
“All engineering students need to complete a capstone project,” Panetta explained. “When we first scope projects, they want their robots to do everything, but when they get their hands dirty, they see the complexity of the problems. They often don’t think at first about cost, power, or interdisciplinary factors like security.”
“That’s where the entrepreneurial pieces come in — everyone now needs to know their customers and market,” she asserted. “So many people think they understand the end user but don’t, which is why companies fail. From cleaning to mobile robots, there is a lot of growth potential as perceptions shift.”
How has the COVID-19 pandemic affected robotics training and development? “There’s no turning back from online learning, and someone asked, ‘How can we do labs?’” Panetta replied. “Why can’t we do things with Legos and robots remotely? We’re seeing more distance education, simulation, telepresence, augmented and virtual reality, and tele-control. We’re starting to see this with remote surgery. Cybersecurity remains important as we build out the infrastructure for these.”
“We have to move past resistance for more robots in medicine, from health pre-screening to security,” she concluded. “We’ll see more small robots, which are easier to manufacture, more scalable, and need less power for dedicated purposes. Secure AI for more human-robot interaction is an opportunity for new jobs and technologies.”
The post Cybersecurity shouldn’t be overlooked during COVID-19 response, says IEEE expert appeared first on The Robot Report.