Nowadays, industries use planning algorithms in logistics and control to schedule flights or manage the power grid, yet computer systems still lack the flexibility and decision-making skills that a human operator possesses. While researchers have armed planning algorithms with the capability to account for uncertainty, these algorithms remain hampered by their inability to improvise when abnormal conditions occur.
Now, researchers at MIT and the Australian National University (ANU) have announced a planning algorithm that generates contingency plans in case the initial plan proves too risky. The algorithm also identifies the conditions under which it should switch to a particular contingency plan.
Developing these backup plans takes additional time and computation, but the algorithm can provide mathematical guarantees that the risk of failure falls below a user-set threshold. For example, a researcher developing a data-gathering plan for a robot might be satisfied with a 90 percent probability that the robot will take all the sensor readings, but would also want a 99.9 percent probability that the robot won't collide with a rock face at high speed. The algorithm treats these thresholds as a "risk budget" that it spends as it explores paths through the graph. If a certain path exceeds the budget, it is discarded as untenable.
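The risk-budget idea can be illustrated with a minimal sketch (not the authors' implementation): each step of a candidate plan carries a hypothetical, independent failure probability, and a path is tenable only if its accumulated failure risk stays within the budget implied by the user's success threshold.

```python
# Illustrative sketch of a "risk budget" check. The edge failure
# probabilities and the independence assumption are hypothetical,
# chosen only to show how a threshold prunes paths.

def path_risk(edge_risks):
    """Probability that at least one step along the path fails,
    assuming independent step failures."""
    survive = 1.0
    for r in edge_risks:
        survive *= (1.0 - r)
    return 1.0 - survive

def within_budget(edge_risks, risk_budget):
    """A path is tenable only if its total failure risk fits the budget."""
    return path_risk(edge_risks) <= risk_budget

# A 90 percent success requirement corresponds to a 10 percent risk budget.
print(within_budget([0.02, 0.03, 0.04], 0.10))  # combined risk ~8.7%: tenable
print(within_budget([0.05, 0.08], 0.10))        # combined risk ~12.6%: untenable
```

Note that individual step risks compound: two steps that each look safe (5 and 8 percent) can still blow a 10 percent budget together, which is why the check has to run over whole paths rather than single steps.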
Using heuristics, the computer equivalents of rules of thumb, the researchers enable the algorithm to quickly decide whether a given branch of the graph is viable. The heuristics have to be optimistic, potentially underestimating risk but never overestimating it, so that the planner can weed out branches without compromising the quality of the solution. In their paper, the researchers describe a method for producing linear approximations of probability distributions, which are much easier to work with mathematically, to make such heuristics practical.
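The pruning logic can be sketched as follows. This is a hypothetical illustration, not the authors' method: the optimistic bound here is a deliberately crude one (the cheapest single remaining step), but it has the key property the article describes: it never overestimates the risk still to come, so no viable branch is wrongly discarded.

```python
# Hypothetical sketch of optimistic pruning. A branch is discarded only
# if even an optimistic (never-overestimating) estimate of the remaining
# risk would exceed the leftover risk budget.

def optimistic_risk_bound(remaining_edge_risks):
    """A trivially optimistic lower bound on the risk still to be incurred:
    the single cheapest remaining step. Real planners would use a tighter
    bound, but it must never exceed the true remaining risk."""
    return min(remaining_edge_risks) if remaining_edge_risks else 0.0

def prune(risk_so_far, remaining_edge_risks, risk_budget):
    """True if the branch can be safely discarded: even the best-case
    completion of this branch would overspend the risk budget."""
    return risk_so_far + optimistic_risk_bound(remaining_edge_risks) > risk_budget

print(prune(0.08, [0.05, 0.01], 0.10))  # best case 0.09 fits budget: keep
print(prune(0.08, [0.05, 0.04], 0.10))  # even best case 0.12 overspends: prune
```

Because the bound is optimistic, pruning is conservative: a discarded branch is provably over budget, while a kept branch may still be rejected later once its actual risk is computed.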
This new research could prove helpful wherever planning systems are commonly used, such as NASA missions that rely on autonomous behavior in spacecraft. “The authors have tackled a very important problem that is highly relevant to NASA missions, where we are increasingly reliant on autonomous behaviors programmed into our spacecraft,” says Michel Ingham, technical group supervisor for the System Architectures and Behaviors group at NASA’s Jet Propulsion Laboratory.
“The authors have provided a method and capability for planning a robot’s activities, which optimizes the reward associated with accomplishing mission objectives while bounding the ‘execution risk’ taken by the mission. This means that our future mission operators could be able to specify how conservative or aggressive they want the spacecraft to be by choosing an appropriate risk bound and have some guarantee that the spacecraft will not exceed this level of risk.”