In Brief
Artificial intelligence has the potential to be a great boon to our society, but only if we can properly understand and control its effects. Decisions made by an AI need to be easily explainable and understandable by humans.
It is the nature of technology to improve over time. As it progresses, technology brings humanity forward with it. Yet, there is a certain fear that surrounds technologies like artificial intelligence (AI) and robotics, in part due to how these have been portrayed in science fiction. This fear, however, is mostly a fear of the unknown. For the most part, humankind doesn’t know what will come of the continued improvement of AI systems.
The coming of the technological singularity is one such outcome that’s greatly influenced by science fiction. Supposedly, AI and intelligent machines will become so smart that they will overtake their human overlords, ending the world as we know it. We don’t know if that would indeed happen, of course — although there are some institutions that are actively working towards making the singularity happen.
But perhaps the most immediate concern people have with AI and automated systems is the job displacement expected to go along with them. A number of studies seem to agree that increased automation will cause an employment disruption in the next 10 to 20 years.
One study predicts machines will replace 47 percent of jobs in the United States. Another study expects 40 percent of jobs will be displaced in Canada, while British agencies predict some 850,000 jobs in the UK will be replaced by automated systems. Meanwhile, 137 million workers in Southeast Asia are in danger of losing their jobs to machines in the next 20 years. The trend is expected to cover a whole range of industries, not just blue-collar jobs.
What to Fear, Really
Given all of this, are we right to fear AI?
At the risk of sounding alarmist: yes, there are things to be worried about. But a great deal of this has to do with how we use AI, according to a piece by ZDNet and TechRepublic UK editor-in-chief Steve Ranger. “AI is a fast-growing and intriguing niche,” Ranger wrote, “but it’s not the answer to every problem.”
Ranger warns of the inability of industries to cope with AI, which could potentially cause another “AI winter.” He writes: “[A] lack of skilled staff to make the most of the technologies, along with massively inflated expectations, could create a loss of confidence.” Moreover, there’s the danger of treating AI as a magical solution to everything, neglecting the fact that AI and machine learning algorithms are only as good as the data put into them. Ranger says, “ways must be found to make sure that AI-led decision making becomes as easy to understand — and to challenge — as any other type.” He sees this as the ultimate threat related to AI, and points out that research is underway into understanding how AI reaches its conclusions. The five basic principles laid out are responsibility (a person must be available to deal with the effects of the AI), explainability (the decisions made by the AI must be simply explainable to the people affected by them), accuracy (sources of error must be tracked), auditability (third parties should be able to easily review the behavior of the AI), and fairness (AI should not be affected by human bias or discrimination).
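As a concrete illustration (not taken from Ranger's piece), several of these principles can be made tangible in code. The sketch below, with hypothetical names and a toy rule standing in for a real model, shows how an automated decision might be logged with its inputs, a plain-language reason, a model version, and an accountable human contact, so that the decision is explainable to the person affected and auditable by a third party:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One automated decision, stored so a third party can audit it later."""
    model_version: str      # accuracy: know exactly which model produced the result
    inputs: dict            # auditability: the exact data the model saw
    outcome: str
    reason: str             # explainability: plain-language justification
    responsible_party: str  # responsibility: a human accountable for the effects
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def decide_loan(income: float, debt: float) -> DecisionRecord:
    # Toy rule standing in for a real model: approve if debt is under 40% of income.
    approved = debt < 0.4 * income
    return DecisionRecord(
        model_version="toy-rule-v1",
        inputs={"income": income, "debt": debt},
        outcome="approved" if approved else "denied",
        reason=f"debt/income ratio is {debt / income:.2f} (threshold 0.40)",
        responsible_party="loans-team@example.com",
    )

record = decide_loan(income=50_000, debt=15_000)
print(record.outcome, "-", record.reason)
```

A real system would replace the toy rule with a trained model, but the point stands: the record, not the model, is what makes the decision challengeable after the fact.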
Ultimately, the greatest threat to humanity isn’t AI. It’s how we handle AI. “Artificial intelligence and machine learning are not what we need to worry about: rather, it’s failings in human intelligence, and our own ability to learn,” Ranger concludes.
Measured and Monitored
Thankfully, there are institutions that have already come up with guidelines for pursuing AI research and development. There’s the Partnership on AI, which includes tech heavyweights like Amazon, Google, IBM, Facebook, Microsoft, and Apple. Another is the Ethics and Governance of Artificial Intelligence Fund (AI Fund), led by the Knight Foundation. There’s also the IEEE’s framework document on designing ethically aligned AI.
The benefits of AI are undeniable, and we don’t need to wait for 2047 and the singularity to see just how much it affects people’s lives. Today’s AI systems shouldn’t be confused with sci-fi’s Skynet or HAL 9000. Much of what we call AI right now consists of neural networks and machine learning algorithms that work in the background of our most common devices. AI is also found in systems that facilitate trends-based decision making in companies and improve customer service.