The University of Cambridge has created a new system called SegNet that can look at a road and recognize elements of the scene such as street signs, road markings, pedestrians, and even the sky. The system takes an RGB image of a road and uses Bayesian analysis of the scene to classify its contents into different layers.
According to the public release, “The first system, called SegNet, can take an image of a street scene it hasn’t seen before and classify it, sorting objects into 12 different categories — such as roads, street signs, pedestrians, buildings and cyclists – in real time. It can deal with light, shadow and night-time environments, and currently labels more than 90% of pixels correctly. Previous systems using expensive laser or radar based sensors have not been able to reach this level of accuracy while operating in real time.”
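The release doesn't describe SegNet's internals, but "classifying" a street scene here means assigning each pixel to one of the 12 categories. As a minimal sketch of that idea (the class names and score values below are placeholders, not from the actual system), per-pixel labelling and the "more than 90% of pixels correct" metric look like this:

```python
import numpy as np

# Placeholder names for 12 categories; the release mentions roads,
# street signs, pedestrians, buildings and cyclists among them.
CLASSES = ["road", "street_sign", "pedestrian", "building", "cyclist",
           "sky", "tree", "pavement", "fence", "pole", "car", "marking"]

def label_pixels(scores):
    """Turn a per-pixel score map (H x W x 12) into a label map (H x W).

    Each pixel gets the category with the highest score -- the final
    step of any per-pixel scene classifier.
    """
    return np.argmax(scores, axis=-1)

def pixel_accuracy(predicted, truth):
    """Fraction of pixels labelled correctly (the '90%' figure)."""
    return float(np.mean(predicted == truth))

# Toy example: a 4x4 "image" with random class scores.
rng = np.random.default_rng(0)
scores = rng.random((4, 4, len(CLASSES)))
labels = label_pixels(scores)
print(labels.shape)  # (4, 4)
```

This only illustrates the output format; the real system produces those scores with a trained model rather than random numbers.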
See it at work in the video below.
The second part of the system lets a vehicle orient itself no matter where it starts. In effect, it can “look” all around itself and determine its location to within a few meters and its orientation to within a few degrees. This makes it a significant improvement over GPS, since it needs no wireless connection to establish its position.
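The release doesn't say how this localization works internally. As a toy sketch of the general idea of camera-only localization (all descriptors and poses below are made-up values, and matching real systems is far more sophisticated), a vehicle can compare what it currently sees against previously mapped views with known positions:

```python
import numpy as np

# Assumed database of stored views: an image descriptor per view,
# paired with a known pose (x meters, y meters, heading degrees).
# Every number here is invented for illustration.
db_descriptors = np.array([[0.1, 0.9],
                           [0.8, 0.2],
                           [0.5, 0.5]])
db_poses = np.array([[12.0, 4.0,  90.0],
                     [30.0, 7.5, 180.0],
                     [21.0, 6.0,  45.0]])

def localize(query_descriptor):
    """Return the pose of the most visually similar stored view.

    No GPS signal is involved: position comes purely from comparing
    the current camera view against mapped views.
    """
    distances = np.linalg.norm(db_descriptors - query_descriptor, axis=1)
    return db_poses[np.argmin(distances)]

x, y, heading = localize(np.array([0.75, 0.25]))
print(x, y, heading)  # nearest stored view: 30.0 7.5 180.0
```

A nearest-neighbor lookup like this is only a stand-in; the point is that location and heading fall out of visual comparison alone.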
A demo of SegNet is available for download now: users can feed it images of roads in their own vicinity, and after processing them the system will report what it sees.
Because this system relies on machine learning in 3D space rather than on GPS, it has a lot of potential, although the technology has not been perfected yet.
“In the short term, we’re more likely to see this sort of system on a domestic robot – such as a robotic vacuum cleaner, for instance,” research leader Professor Roberto Cipolla said. “It will take time before drivers can fully trust an autonomous car, but the more effective and accurate we can make these technologies, the closer we are to the widespread adoption of driverless cars and other types of autonomous robotics.”