Hurricanes, earthquakes, wildfires — the natural disasters that pummeled various parts of the United States and Mexico in 2017 crippled communities small and large. Collectively, these events killed thousands of people and left survivors without power. Though these disasters demand the use of grim superlatives — the strongest earthquake in 100 years, the largest and most destructive fires — some scientists argue they merely portend more frequent and destructive events as we continue to experience the effects of climate change.
But the way people and organizations respond to these disasters — indeed, who responds — is also evolving.
Cities rebuild after each event, as they have for generations. But the technology we use for rescue and recovery in the wake of these disasters has changed, empowering designated crews to help save more lives, but also enabling others — from tech startups to mere bystanders — to intervene. Is it ethical for these other parties to participate if they might cause more harm than good? And who should be held responsible if the technology doesn’t quite work as planned?
Under ideal circumstances, disaster response comes from a pyramid-like series of organizations. The U.S. National Response Framework (NRF), the federal guide that lays out how responders across all levels of government coordinate with private, public, and non-governmental organizations, stipulates that incidents should be handled at the lowest jurisdictional level possible.
To start, responders and volunteer agencies at the local level handle the emergency and its aftermath. Local police units, firefighters, emergency medical personnel, and rescue workers rush to the site of a disaster to provide aid in the immediate aftermath. If local resources are insufficient, the state government is asked to step in next. Typically, if a state is unable to meet the need itself, its government will request assistance from other nearby states. If that is also insufficient, the federal government is called in. The Federal Emergency Management Agency, or FEMA, analyzes the request and recommends whether or not the federal government should respond. Once the president declares the event a major disaster, FEMA coordinates federal assistance. This includes help from other government bodies like the Department of Health and Human Services, the Department of Energy, and the Army Corps of Engineers.
Hurricane Katrina, one of the U.S. government’s biggest disaster response failures, is an example of when that system breaks down. The slow speed at which the federal government responded to the storm, which peaked at Category 5 before making landfall, caused more deaths than a quicker response would have, experts later determined.
Fortunately, in the more than a decade since Katrina, the government has become much quicker to respond to disasters. FEMA can now allocate resources in anticipation of an event instead of reacting after it happens. Social media ensures that the world can see the full extent of the damage — and get a clear picture if the reaction is mismanaged.
But some things haven’t changed. The government — the lumbering, bureaucratic entity that it is — can only respond so quickly in the event of a disaster. Because local and state governments are still the first line of responders, socioeconomic inequality continues to dog disaster management; less wealthy areas — like Puerto Rico, which filed for bankruptcy earlier in 2017 — aren’t able to rebuild as quickly because they lack the standing resources to do so.
That responder hierarchy, compounded by systemic inequality, often confuses Americans — we expect federal government assistance and technology in the wake of a disaster, though it’s far from a guarantee.
Historically, it’s been difficult to get governments to think about the technology needed for disaster response before the disaster happens because “people don’t want to donate money to those things ahead of time,” Gregory Hogan, a senior staff member at the Massachusetts Institute of Technology Lincoln Laboratory, told Futurism.
That vacancy has increasingly left room for companies to intervene with both experimental and proven technology.
In October 2017, one month after Hurricane Maria devastated Puerto Rico, more than half of the island was still without power, inhibiting thousands from accessing the internet. Without it, they couldn’t do all the things people need to do online after a disaster: ask for help, coordinate repairs on damaged homes, work, assure their loved ones that they were doing OK, or just stream some music or TV in an attempt to return to normalcy.
So that month, Project Loon, a collaboration between Google’s parent company Alphabet and cell phone carriers AT&T and T-Mobile, launched helium balloons to bring internet to Puerto Rico. And it worked — the balloons, plus their ground connections, got more than 100,000 people online. Project head Alastair Westgarth wrote in a blog post, “We’ve never deployed Project Loon connectivity from scratch at such a rapid pace.” The company filed for approval from the Federal Communications Commission (FCC), received it, and launched its balloons all on the same day.
This was a major success for the project, but it wasn’t its first deployment. In 2015, Google signed an agreement with the Sri Lankan government to work on providing free internet for the island’s citizens. In February 2016, the tech was being tested in Indonesia, a significant milestone for the project.
If Puerto Ricans had not been in such dire straits after Hurricane Maria, the FCC probably would not have granted that license (an “experimental” one reserved for technology that’s not yet tried-and-true) nearly as quickly.
Alphabet wasn’t the only company to step in to help citizens after the hurricane. Car company Tesla donated solar panels and batteries to power a children’s hospital in San Juan. When Puerto Rico gets its power back, the setup might become permanent, NPR reported.
Much of this technology had performed well in previous tests, but what if it failed in Puerto Rico? Who, then, would be responsible for those unkept promises: the companies themselves? The organization or government agency that permitted the technology’s deployment?
After Hurricane Maria struck, Puerto Rico Governor Ricardo Rosselló wasn’t getting much aid from the federal government to manage the shortages of food and potable water, or to get the island’s electrical grid back online. That left the door open for private corporations to intervene. But Tesla and other private companies can’t be everywhere at once, and assistance from them is never assured. There’s no guarantee that Tesla will have the financial means, or motivation, to step in and assist after the next devastating hurricane. Without legislation in place to negotiate the terms of prolonged support from private parties, what Musk giveth, he could also take away without warning.
Despite significant research, Futurism was unable to discover any examples of failed technology that exacerbated the suffering of victims of natural disasters. There could be a few reasons why, including that media outlets are simply not inclined to report on these sorts of failures, or that companies work hard to sweep these failures under the rug. These seem unlikely, though, since the press as a whole has proven quite adept at taking down tech companies, from 23andMe to Theranos, that aren’t helping people in the ways they claim (and, indeed, there’s an argument to be made that companies seek to help victims of natural disasters for the spectacular press it provides, so they had better be certain it works).
What seems more likely is that we’ve been extraordinarily lucky so far, that all of companies’ gambles have paid off and actually helped people. But without proper government oversight, our luck could soon run out.
Tech-based interventions after disasters aren’t just coming from companies — citizens themselves are getting involved, sometimes impeding the work of professionals with the expertise to help victims the most. That might call for a different sort of government involvement: one in which individuals who are supposedly trying to help could be penalized.
California’s 2017 wildfire season was one of the worst in recent memory. In October alone, fires in wine country caused more than $9 billion in damage and burned more than 245,000 acres. The California Department of Forestry and Fire Protection (CalFire) reported that the Tubbs fire in Sonoma County was the most destructive wildfire in California history. In attempts to capture footage of the rapidly spreading fires, residents dispatched their drones to smoky skies. But in their efforts to use this technology to showcase the devastation, drone operators actually hindered firefighters. Firefighting aircraft typically fly just a few hundred feet above the ground — the same altitude as commercially available unmanned aircraft. This creates “the potential for a mid-air collision that could injure or even kill aerial and ground firefighters, as well as residents on the ground below,” according to a CalFire announcement.
In these situations, law enforcement officials have decided that individual operators, not the manufacturers of the devices, should be held responsible for getting in the way of such missions; California Highway Patrol arrested one drone operator for impeding rescuers.
Robin Murphy, a professor of computer science and engineering at Texas A&M University, is very familiar with the issues surrounding unmanned aerial vehicles (UAVs) and drones. She’s “been in 27 disasters not of my own making,” she told Futurism, including 9/11, Hurricane Katrina, Fukushima, and Hurricane Harvey, to name a few. It’s not exactly bad luck that put her there; Murphy is the director of the Center for Robot-Assisted Search and Rescue (CRASAR), a nonprofit corporation that oversees the deployment of robots — and the trained teams that operate them — in disaster areas.
Volunteers play an important role in the local response to disasters, but sometimes their response can actually inhibit more organized or trained responders. Murphy said it’s prohibited to show up to some kinds of disaster areas if you’re not invited, particularly if you’re using recreational drones, because you might make things worse. “You don’t bring experimental gear to a disaster,” she said. “What we see sometimes is someone who says, ‘My UAV’s great,’ but then there’s a software glitch.”
That glitch, or a lack of piloting experience, could cause the drone to plummet from the sky to hit a bystander, or into the path of a firefighting helicopter. What’s more, those recreational UAVs make it harder for approved aerial vehicles — like the ones Murphy helped deploy through CRASAR — to do their jobs after a disaster. Murphy has experienced this firsthand: when she and her team tried to get to Japan in the days after the 2011 tsunami, “We got blocked for two weeks because too many people came in and were in the way,” she said.
“I’m not trying to demonize people trying to help out, but rather raise awareness,” Murphy said. “With unmanned aerial things, you tend to get an ‘all rules are off’ attitude from pilots. But if there’s any moment in time that you have to pay attention to Federal Aviation Administration regulations, it’s during a disaster when there’s a dense use of airspace.”
Government-sanctioned technology is invaluable in disasters. After Hurricane Harvey, which hit hardest in Texas and Louisiana in late August 2017, CRASAR coordinated the largest known deployment of UAVs by public officials in a federally declared disaster. The 119 UAV flights checked how many people had not evacuated their homes, monitored water levels at levees and rivers, projected how long neighborhoods would be cut off by flooding, and systematically documented damage so federal disaster relief could be efficiently allocated and future planning could incorporate lessons from the event.
In the wake of a natural disaster, or while it’s happening, minimizing self-deployment of technology is crucial. Murphy said it’s important to have “things that actually work reliably when we talk about public safety,” being operated by designated parties who are wholly responsible for their equipment and the impact it has on safety.
To ensure that citizens don’t suffer when tech companies get involved after a disaster, policy-makers need stricter guidelines, Mark Graham, a professor of internet geography at the University of Oxford, told Futurism. He said governments need to lay down ground rules stipulating how the private sector can effectively participate during disaster situations. “In other words, this will mean that the scope for companies to ‘take advantage’ is drastically reduced,” Graham said. He also said that states need to make sure that most of the response comes from the public sector — operated by the government — rather than the private sector. “When lives are at risk we should not be relying on the generosity of corporations that may not always be there,” he said.
In fact, past experience has shown that a collaboration between private enterprise and government spending can result in the most reliable and effective technology to respond to emergencies.
In 1998, the company iRobot won a DARPA contract to build a tactical mobile robot to disable roadside bombs and improvised explosive devices for the military. The contract led to the development of the 24-kilogram (53-pound) PackBot. The PackBot moves on tank-like treads and is equipped with one long arm on top that allows it to reach out and grab objects or manipulate things like doorknobs or debris in the environment. It also has two flippers in front that help it navigate challenging terrain, and a camera on top.
Its development was just in time, in a morbid way — PackBot’s first test run was at Ground Zero after the terrorist attacks in New York City. For 10 days, five PackBots performed search and rescue on the rubble pile, entering structurally unstable buildings surrounding Ground Zero that might have been unsafe for human rescuers. iRobot’s chief operating officer at the time, Joe Dyer, told CNN that those robots were “literally pulled out of the laboratory and taken to 9/11.”
Over the years, PackBot went on many more missions. In 2002, it was in Afghanistan, seeking out terrorists and booby traps in mountain caves, Tom Frost, the president of Endeavor Robotics (a company formerly a part of iRobot), told Futurism. Then in 2011, iRobot sent two PackBots and two Warrior robots, along with a trained team to control them, to the site of the Fukushima nuclear reactor meltdown, where they cleaned up debris and enabled workers to do radiation sensing from a safe distance.
The company’s robots undoubtedly served important, potentially life-saving roles after 9/11 and Fukushima. They continue to prove useful during the ongoing nuclear disaster clean-up, as they enable people to explore lethal environments without exposing themselves to danger. “If they [the robots] would be helpful, we would make them available,” Frost said.
Perhaps it doesn’t matter why private companies step up to provide aid, rescue efforts, and technology to restore critical infrastructure following a natural disaster. Companies like Endeavor Robotics and Tesla volunteered much-needed services at a time when they were necessary, and that is enough.
But the very fact that these companies needed to step in after Hurricane Maria ravaged Puerto Rico indicates that the way the U.S. manages disasters is far from perfect.
Emergency response, as Murphy noted, requires a surprising amount of nuance and subtlety. In the age of social media, how we respond to disasters is changing; survivor-activists can broadcast governments’ and companies’ response to disasters to screens small and large worldwide. Relying on technology from the private sector to do the heavy lifting, without accompanying government oversight, may be a step in the wrong direction. Disaster management remains the government’s responsibility, and that’s where it needs to stay.
We can still appreciate when companies jump in to help. The government should just be the one telling them how high to jump.