The White House hosted an AI symposium on Thursday where representatives from Google, Amazon, Facebook, and the like met with academics to talk about how best to secure America's role as a world leader in artificial intelligence. They also announced the Select Committee on Artificial Intelligence, which will operate through the National Science and Technology Council and determine how best to leverage AI for American industry and market growth.

And while it's great to see major investment in science and technology, it's also safe to assume that the Trump administration (which once floated Rudy Giuliani as the country's chief of cybersecurity) might not be, uh, up to the task of running this task force.

The new AI task force aims to foster a free-market approach to technological advancement, according to the AP. But history has shown time and time again, in industries like airlines, big banks, and private healthcare, that this, eh, doesn't always work out so well. Right now we're in a unique window in which a lot of incredible technology and algorithms are emerging and influencing a massive portion of everyday life. We also haven't really figured out what that impact is actually going to look like in a socioeconomic sense, or how best to use these new tools to benefit the majority of people. And given that it's impossible to talk about machine learning without adding the caveat "No, this isn't Skynet," we should probably sort these things out sooner rather than later.

As such, here are two things that the Trump administration could do to make sure that AI research is fair and beneficial to everyone rather than giving massive tech companies free rein to do as they please.

1. Promote transparency and prevent bias in AI algorithms.

An AI can only work with the information fed to it. And given that people have all sorts of biases, conscious and unconscious, that means AI can be prejudiced as well. This gets especially problematic when it involves algorithms used to automate hiring decisions and police activity, which can reflect a society’s own bigotry.

While the idea of having a third party audit your algorithm to make sure your AI is as fair as possible has recently gained attention, this isn't always feasible for more complex deep learning software whose decision-making can be inscrutable.
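To make the idea of an audit a little more concrete, here is a minimal sketch of one check an auditor might run: the "four-fifths rule" disparate-impact test applied to a model's hiring decisions. The model outputs, data, and group labels below are entirely hypothetical placeholders, not anything proposed by the White House or the companies involved.

```python
# Minimal sketch of a disparate-impact ("four-fifths rule") check on a
# hypothetical hiring model's decisions. All names and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are the model's hire/no-hire decisions (1 = hire)
# and a binary protected attribute splitting applicants into two groups.
decisions = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)

def selection_rate(decisions, mask):
    """Fraction of applicants in the given group that the model selects."""
    return decisions[mask].mean()

rate_a = selection_rate(decisions, group == 0)
rate_b = selection_rate(decisions, group == 1)

# Disparate-impact ratio: the less-favored group's selection rate divided
# by the more-favored group's. Under the four-fifths rule, a ratio below
# 0.8 is commonly treated as a red flag for adverse impact.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Selection rates: {rate_a:.2f} vs {rate_b:.2f}, ratio = {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: flag for deeper review.")
```

A real audit would look at far more than one ratio (error rates across groups, proxies for protected attributes, and so on), but even a simple screen like this is the kind of thing a watchdog could require and publish.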

To keep AI honest, the government could create a regulatory body to keep an eye on the algorithms under development. Just as the FDA has to approve new pharmaceuticals and can post warnings about side effects, the government could create an administration that audits algorithms for bias and publishes consumer warnings about companies that use untested or potentially unfair AI.

This organization could also oversee how companies use and sell people’s data to make sure that their privacy isn’t being violated or their personal information compromised.

2. Protect and train workers at risk of automation.

Perhaps the biggest fear surrounding AI is that it will take the jobs of red-blooded, god-fearing American workers as more of those jobs become subject to automation. Big companies have every financial incentive to replace human workers, who expect outrageous luxuries such as weekends, bathroom breaks, and salaries, with artificial intelligence that doesn't need any of that.

It's safe to say we're not about to see Trump provide those displaced workers and their families with a universal basic income. But it would be reasonable to expect the government to mandate that any company that invests in automating its workforce operate with transparency about upcoming AI developments and provide training to the workers who lose their jobs.

Companies could be required by the federal government to announce which jobs might be replaced by artificial intelligence as soon as they decide to invest in the technology, and to provide career training to the people who would lose their positions.

Not only would this help keep the American workforce armed with an up-to-date skillset for developing technologies, but it would also lead to a more transparent and less turbulent economy if people and companies are given enough warning to adapt.

After all, if these companies have the money to invest in developing AI that'd replace human workers, they definitely have the money to invest in the people themselves. It just might take governmental oversight to make them actually do so.

