Although simulation can be a more affordable, safer, and reliable alternative to field testing of robots, it is not as widely used as it could be, found researchers at Carnegie Mellon University’s School of Computer Science. After surveying a variety of robotics developers, the scientists identified 10 challenges and areas for improvement in simulators.
In a paper released last month, Afsoon Afzal, Deborah S. Katz, Claire Le Goues, and Christopher S. Timperley described the results of a survey of 82 robotics developers. For instance, 85% of the respondents said they use simulators in testing.
However, the CMU team found barriers to simulation, including the gap between simulation and reality, a lack of reproducibility, and resource costs associated with using simulators. In addition, the researchers offered recommendations on how simulation software could be used more for verification and validation.
Here are the 10 robotics simulation challenges that the CMU study identified:
1. Reality gap
A number of participants said simulators do not replicate the real-world behavior of a robot accurately enough to be useful. This gap is both a challenge when trying to use simulation and a reason some developers avoid it in the first place.
Some of the respondents shared specific examples. In particular, realistically modeling stochastic processes — such as signal noise — and integrating those models into the simulation as a whole is a challenge. “A classic problem is integrating wireless network simulation with physical terrain simulation,” reported the researchers.
While the reality gap might be too large for some respondents, others said simulation can still serve as a valuable tool. “Software behavior in [simulation] is different compared to real, so not everything can be tested, but a lot can be,” wrote one respondent.
2. Simulators too complex
The time and resources required to set up a sufficiently accurate simulator could be better spent on other activities, noted some of the survey respondents.
Accurate simulation of the physical world is an inherently challenging process that naturally involves a composition of various models, said the CMU researchers. Alongside the essential complexity of simulation are sources of “accidental complexity” that do not relate to the fundamental challenges of simulation itself, but rather to the engineering difficulties that developers face when trying to use simulation.
These sources of accidental complexity may ultimately lead users to abandon or not use simulation at all. Inaccurate, inadequate, or missing documentation can make it difficult to learn and use a simulator.
“Lack of documents for different platform types and sometimes wrong documentation makes us lose a lot of time working on [stuff] that will never work,” said one participant. “For example, the Gazebo simulator does not work well in Windows.”
In some cases, documentation may be written in another language. “The language was Japanese, but we don’t speak that language, so we couldn’t use well the simulator,” wrote another respondent.
Certain application programming interfaces (APIs) can make it difficult to extend the simulator with new plugins. A lack of support for industry-standard 3D modeling formats in widely used simulators such as Gazebo can make the creation of models a tedious and error-fraught process.
“Gazebo is the de-facto [simulator] right now and is poorly documented and difficult to customize to any degree,” said a survey participant.
3. Missing capabilities
Finding a simulator that provides all of the characteristics a user desires can be challenging, the survey found. As one respondent pointed out, simulators that do possess all of the desired qualities are very expensive. “Adding plugins is usually very challenging, and the only good frameworks that do any of this stuff well are very expensive (V-Rep and Mujoco, for example),” the respondent wrote.
The survey asked participants which simulation features they desired most but are unable to use in their current setups. Some of those mentioned were the ability to simulate at faster-than-real-time speeds, native support for “headless” execution, and an easier means of constructing environments and scenarios.
One participant asked for the “ability for controllable physics fidelity. First order to prove concepts then higher fidelity for validation. Gazebo doesn’t have that.” Other capabilities specified by participants include native support for multi-robot simulation and large environments.
4. Lack of reproducibility
The lack of reproducibility and the presence of non-determinism in simulators also lead to difficulties in testing, reported survey participants.
One respondent said a “lack of deterministic execution of simulators leads to unrepeatable results.” This points to a need to accurately reproduce system failures that are discovered in testing, in order to diagnose and debug those failures.
If a tester cannot consistently reproduce the failures detected in simulation, it will be difficult to know whether changes made to the code have fixed the problems.
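One common mitigation is to derive every source of randomness in a test run from a single seed, so that a failing trial can be replayed exactly. A minimal sketch in Python, assuming the simulation's stochastic elements (the `run_trial` function and its Gaussian sensor noise here are illustrative stand-ins) can be driven by a seedable generator:

```python
import random

def run_trial(seed):
    """Run one simulated trial with all randomness derived from a single seed."""
    rng = random.Random(seed)  # isolated generator; avoids hidden global state
    # Stand-in for a stochastic simulation step, e.g. injected sensor noise.
    readings = [10.0 + rng.gauss(0, 0.5) for _ in range(5)]
    return readings

# Re-running with the same seed reproduces the exact same trajectory,
# which makes failures found in simulation repeatable and debuggable.
assert run_trial(42) == run_trial(42)
assert run_trial(42) != run_trial(43)
```

This only helps, of course, if the simulator itself exposes its seeds; physics engines with non-deterministic threading can still break repeatability.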
5. Scenario and environment construction
Testing in simulation requires a simulated environment and a test scenario. Participants reported difficulty in constructing both test scenarios and environments. “Setting up a simulation environment is too much work, so I don’t do it often,” said one participant.
“Scripting scenarios was not easy,” said another. “Adding different robot dynamics was also not easy.”
Respondents to the CMU survey said they wanted to be able to construct such scenarios more easily or automatically.
“Making URDF [Unified Robot Description Format] files is a tremendous pain, as the only good way to do it right now is by hand, which is faulty and error-prone,” wrote a participant.
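One workaround for hand-writing URDF is to generate it programmatically, so the XML structure is always well-formed. A hypothetical sketch that templates a single-link URDF in Python (the robot name, link name, and box dimensions are placeholders, not from the survey):

```python
import xml.etree.ElementTree as ET

def make_box_link_urdf(robot_name, link_name, size):
    """Build a minimal URDF with one box-shaped link and return it as XML text."""
    robot = ET.Element("robot", name=robot_name)
    link = ET.SubElement(robot, "link", name=link_name)
    visual = ET.SubElement(link, "visual")
    geometry = ET.SubElement(visual, "geometry")
    ET.SubElement(geometry, "box", size=" ".join(str(s) for s in size))
    return ET.tostring(robot, encoding="unicode")

# The result is plain XML that a simulator's URDF loader can parse.
urdf = make_box_link_urdf("demo_bot", "base_link", (0.3, 0.2, 0.1))
```

A real robot description would also need joints, inertial properties, and collision geometry, but generating even these from a script removes one class of copy-paste errors.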
6. Simulators demand resources
Simulation is computationally intensive and often requires specialized hardware, such as graphics processing units (GPUs). Participants reported that these hardware requirements contribute significantly to the expense of simulation.
These costs are compounded when tests are run multiple times, such as in test automation.
Participants also reported difficulties running simulations in parallel or taking advantage of distributed computing across several machines. Simulations of large environments or long durations became too resource-intensive to be practical, they said.
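Where a simulator does allow multiple independent instances, trial-level parallelism can at least be scripted on a single machine. A sketch using Python's multiprocessing; the `simulate` function here is a stand-in for launching one self-contained simulation run:

```python
from multiprocessing import Pool

def simulate(scenario_id):
    """Stand-in for one self-contained simulation run (e.g. a headless instance).

    A real version would launch a simulator process with its own ports and
    temp directories so parallel instances do not collide.
    """
    return scenario_id, scenario_id % 2 == 0  # (scenario, passed?)

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(simulate, range(8))  # run 8 scenarios, 4 at a time
    failures = [sid for sid, passed in results if not passed]
    print(failures)
```

The hard part in practice is the isolation the comment glosses over: network ports, GPU contention, and shared temp files are exactly what respondents said made parallel runs fragile.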
7. No automated testing
Although a graphical user interface (GUI) is an important component of a simulator, participants said they prefer running the simulator headless, or without the GUI, for test automation. Disabling the GUI eliminates the computational overhead of the simulator caused by rendering heavy graphical models. Participants cited the inability to run a simulator headless as a major challenge for automation.
“Making the simulator run without GUI on our Jenkins server turned out to be more difficult than expected,” said one survey participant. “We ended up having to connect a physical display to the server machine in order to run the simulation properly.”
The ability to set up, monitor, and interact with the simulation via scripting and without manual intervention was also considered vital for automation. Participants reported the need to devise creative solutions in the absence of support for scripting.
For example, one respondent wrote that “URSim needs click-automation to run without human interaction.”
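Where a simulator separates its physics server from its GUI client — classic Gazebo splits them into `gzserver` and `gzclient` — headless automation amounts to launching only the server. A sketch that assembles such a command line; the flags follow classic Gazebo and are worth verifying against your installed version:

```python
def headless_command(world_file, paused=False):
    """Build a gzserver command line that runs a world with no GUI attached."""
    cmd = ["gzserver", "--verbose"]
    if paused:
        cmd.append("-u")  # start the physics paused (classic Gazebo flag)
    cmd.append(world_file)
    return cmd

# In automation this would be launched non-interactively, e.g. with
# subprocess.run(headless_command("test_arena.world"), timeout=300).
```

Because no GUI process is started, no display is needed; on CI machines that lack one entirely, tools such as Xvfb are a common fallback for simulators that insist on a display.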
8. Continuous integration
According to the CMU researchers, continuous integration (CI) is an emerging technique in automated software maintenance. CI systems are used to automate the building, testing, and deployment of software. However, survey participants said they faced difficulties engineering the simulation to be used in CI and run on cloud servers.
Many of these difficulties, the researchers concluded, arise from the missing automation features and the high resource costs discussed earlier as challenges.
9. Simulators need to be more reliable
One of the challenges of using a simulator in a test automation pipeline is the reliability of the simulator itself. Participants reported unexpected crashes and problems with timing and synchronization while using the simulator in automation.
One respondent said that ensuring clean termination of the simulator is a challenge. When the software crashes, it should properly store logs and results, as well as properly kill all processes to prevent resource leaks, before ending the simulation.
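A common defensive pattern in automation is to wrap the simulator process so that logs are captured and the process is reaped even when a run hangs or crashes. A sketch, assuming the simulator is launched as a subprocess (the command and log path are up to the caller):

```python
import subprocess

def run_simulation(cmd, log_path, timeout=300):
    """Run a simulator command, always capturing logs and reaping the process."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    try:
        out, _ = proc.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()               # prevent leaked processes on hangs
        out, _ = proc.communicate()
    with open(log_path, "wb") as f:
        f.write(out)              # logs survive even when the run fails
    return proc.returncode
```

A production version would also kill any child processes the simulator spawns (e.g. via process groups), which is exactly the resource-leak problem the respondent described.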
10. Interface stability
Participants reported unstable and fragile interfaces as a challenge for automation. For example, one respondent said that “APIs are pretty fragile and a lot of engineering need[s] to be done to get it working.”
Several participants reported difficulties in integrating existing code or infrastructure with simulation APIs. More specifically, participants desired better integration of simulators with the Robot Operating System (ROS).
Recommendations for robotics simulators
Simulation software should be easier to use, both for basic and advanced purposes, wrote the CMU researchers. It should also be able to support complex, large-scale environments that more closely resemble the physical spaces in which robots are actually deployed. In addition, simulators should be built to support scalable automation, they said.
Software providers could make simulators easier to use by eliminating sources of complexity, introducing user-friendly features, and improving documentation, wrote Afzal, Katz, and company. Examples of the features requested by survey respondents include a Web interface by default rather than a traditional GUI.
Participants also said they want support for models written in industry-standard formats, as well as augmented reality visualizations. Such changes would reduce the learning curve and the amount of time needed for functions such as automated testing, the report noted.
The scope and capabilities of simulators could be expanded to support more realistic simulations of real-world robot deployments for a wider range of domains. To do so, simulators must represent detailed environments that may contain multiple robots, and they should achieve greater physical fidelity without increasing resource costs, said the report.
The study’s authors also recommended that simulators provide interactive tools to enable developers to easily design and generate vast scenarios and environments.
The post 10 challenges of using simulators for testing robots appeared first on The Robot Report.