Despite how much they know about the natural world, ecologists and wildlife experts sit atop a treasure trove of unexamined information. They’re collecting a ton of data about the natural world, but it’s incredibly daunting (and often very tedious) to go through it all and figure out what it means.
One way they’re collecting data: motion-triggered camera traps. Set up in the wild, these cameras take millions of photos of animals doing, well, animal things in their natural habitats. Once analyzed, those photos could reveal undiscovered patterns of behavior, or show us how to better protect the environment. But for the most part, we can’t be more specific about what those “animal things” are, because it takes people thousands of mind-numbing hours to sort through and analyze all of these photos.
Now, though, we might finally learn the lessons hidden in those snapshots. Using an artificial intelligence system, scientists found a way to automate the whole process — their algorithm can recognize 48 different animal species and determine what each animal is doing in a photo, according to new research published in PNAS.
Those mind-numbing hours of sifting through photos? Scientists wouldn’t usually do it themselves. Often, they would rely on science-curious volunteers who agreed to look through and tag pictures. One such project, Snapshot Serengeti, which maintains a network of camera traps and relies on citizen scientists to tag the images, provided the millions of animal photos and their descriptions. The AI system went through those previously tagged photos and learned to identify species, such as wildebeest and gazelle, and their actions, such as resting, eating, or interacting with each other. After sifting through all those photos, the AI system was practically as good as a person at tagging new photos recorded in the wild.
More specifically, the new classification tool is right 99.3 percent of the time; human volunteers generally perform about 0.3 percent better. But the AI system’s lower performance stemmed from particularly difficult photos, like the close-up of an impala’s leg that the AI mistook for a zebra. If those photos are left to the humans, the AI handles the rest just as well as the crowd-sourcing volunteers do. And even with that little bit of human assistance, the AI can classify the Snapshot Serengeti photoset in 17,000 fewer hours than it would take a team of humans to do the whole thing, according to the researchers’ calculations.
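That division of labor — letting the model label the photos it’s sure about and handing the tricky ones to people — amounts to a simple confidence-threshold triage. Here is a minimal sketch of the idea; the function and field names, the example photos, and the 0.95 cutoff are all illustrative, not taken from the paper:

```python
# Hypothetical sketch of human-in-the-loop triage: the model keeps
# high-confidence predictions and defers low-confidence photos to volunteers.

CONFIDENCE_THRESHOLD = 0.95  # assumed cutoff; in practice this would be tuned


def triage(predictions, threshold=CONFIDENCE_THRESHOLD):
    """Split predictions into auto-accepted labels and photos needing review.

    predictions: list of (photo_id, label, confidence) tuples.
    Returns (accepted, needs_review).
    """
    accepted, needs_review = [], []
    for photo_id, label, confidence in predictions:
        if confidence >= threshold:
            accepted.append((photo_id, label))  # model's label stands
        else:
            needs_review.append(photo_id)  # route to human volunteers
    return accepted, needs_review


# Illustrative example: a difficult close-up goes to humans, the rest
# are labeled automatically.
preds = [
    ("img_001", "wildebeest", 0.99),
    ("img_002", "zebra", 0.62),  # e.g. a confusing close-up shot
    ("img_003", "gazelle", 0.97),
]
accepted, needs_review = triage(preds)
# accepted -> [("img_001", "wildebeest"), ("img_003", "gazelle")]
# needs_review -> ["img_002"]
```

The payoff of this design is that the threshold is a single dial: raise it and humans see more photos but overall accuracy climbs; lower it and the machine absorbs more of the workload.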
This new algorithm and other time-saving tools could completely change the game for biologists and ecologists. Lots of scientific discoveries happen when researchers dig back through the archives, and problems with wildlife observation leave gaps in our knowledge of the world. For instance, scientists only confirmed last year that wild aardvarks drink water, because no one had seen it happen.
Taking a big-data approach to wildlife conservation might give us new strategies for protecting ecosystems that modern industrialization and development have been slowly killing. By using artificial intelligence to learn as much as we can about how these animals act and interact — far more quickly than a team of human volunteers could on its own — ecologists may find new ways to keep them alive as the world changes around them.