The Silicon Valley arms race is headed to Washington.
DHS.ai
The US Department of Homeland Security (DHS) has unveiled an "AI roadmap," The New York Times reports. It's the first federal agency to announce such sweeping AI measures, and it has partnered with the Silicon Valley firms OpenAI, Meta-formerly-Facebook, and Anthropic to develop AI "pilot programs" to deploy throughout the agency.
Its reason? If you can't beat 'em, join 'em.
"One cannot ignore it," DHS secretary Alejandro Mayorkas told the NYT, "it" being AI. "And if one isn't forward-leaning in recognizing and being prepared to address its potential for good and its potential for harm, it will be too late and that's why we're moving quickly."
Sure! But some of the DHS's planned AI integrations range from the eyebrow-raising to the downright concerning.
For example, the agency says it plans to use chatbots to train immigration officials "on how to conduct interviews with applicants for lawful immigration," a use that feels riddled with ethical quandaries. The American immigration system is notoriously dehumanizing as it is; should the DHS seek to use actual non-human entities to train officials tasked with interfacing with immigrants? Consider the alarm bells rung.
Big Ifs
The DHS is also reportedly planning to use AI to comb through large amounts of data to develop better disaster-readiness plans. And while that sounds fine at face value, experts have warned that the persistence of harmful biases in AI could negatively impact vulnerable populations during emergencies. (The DHS plans do note that the agency is seeking to build "responsible" AI that "avoids inappropriate biases," though that's a lot easier said than done.)
Among other applications, the DHS says it plans to use AI for data retrieval and even as a tool to generate summaries of documents "relevant" to investigations into the trafficking of child abuse materials, humans, and drugs. Again, these are important issues to tackle, and the DHS says that AI's ability to comb through large datasets and find patterns has helped in some of these cases. But given AI's well-documented penchant for hallucinating, it's unclear how useful a document-paraphrasing AI might be during the investigative process.
That all said, this is a pilot program, and the agency told the NYT that it will report the results of its AI efforts at the end of the year. If anything's for sure, it's that the Silicon Valley arms race is evolving into a competition to ink government contracts, and we can likely expect to see more deals like this one.
"We cannot do this alone," Mayorkas told the NYT. "We need to work with the private sector on helping define what is responsible use of a generative AI."
More on AI: SXSW Audience Loudly Boos Video about How AI Is Awesome