In 2016, some Google employees shared a video that was both inspiring and unsettling. In the nine-minute film, dubbed "The Selfish Ledger," a narrator calmly and compellingly presents the idea that a ledger of data generated by human users could be used to achieve a larger societal goal.

“What if we focused on creating a richer ledger by introducing more sources of information?” the narrator posits. “What if we thought of ourselves not as the owners of this information, but as transient carriers, or caretakers?”

In other words, it imagines a future of total data collection in which a company such as Google can subtly nudge users into alignment with objectives that improve their own lives (environmental sustainability or improved health, for example) and that also align with Google's view of the world. Eventually, the company can custom-print personalized devices to collect more and different types of data, gaining a more detailed picture of each user. The net result: the company guides the behavior of entire populations to help solve global challenges such as poverty and disease.

This vision of the future is, in a sense, inspiring — who wouldn’t want a world without pandemic disease and poverty? — but there’s also something deeply unsettling about it. The video envisions a future in which goal-driven automated ledgers become widely accepted; it is the ledger, rather than the end user, that makes decisions about what might be good for individuals and society at large, seeking to fill gaps in its knowledge with a cold precision that seems ripped from an episode of Black Mirror.

Like other firms spearheading the development of artificial intelligence (AI), Google wants more out of its users. The company is increasingly inquisitive about who they are, assertive in how it wishes to interact with them, and willing to push the limits of what they consider acceptable intrusions into their lives. Rather than provoking a robust negative reaction, the invasion appears to be welcomed by many users. We have already been “programmed” to accept Google’s (and other companies’) unsolicited overtures, such as when Google Maps plans the routes we travel in our daily routines or when Facebook assembles photo albums for us without being asked. We already consider these intrusions normal and acceptable.

The ethical use of AI is a matter of public discourse, but Google (and others) seem unfazed by the potential dark side of their products and practices. We know this because they keep pressing forward to implement their visions of the future, visions they may not necessarily see a need to reveal to the public. Google wants to understand and control the future before it occurs by, in essence, creating it, and it is using AI and machine learning (ML) to help interpret and manage that future.
Our collective technological future is unfolding so quickly that no single government or company will be able to control it. On one hand, that is good: from an ethical perspective, no single entity should be able to control it. On the other hand, if there is no real or realizable oversight over the use of AI, there may be little reason to believe that an AI-dominated future will be a net positive for humanity.

So, is Google to be commended for attempting to contain and craft the future, or should it be feared and resisted at every turn? Is there a middle ground? Most consumers do not know the difference between a reality they control and a reality that is gradually being controlled for them, nor will many necessarily care. Will that allow organizations like Google to do basically whatever they want? Should we see our seamless embrace of AI as a way to bring about a better future, or should we approach it with caution?

The truth is, there is no single answer to these questions, and no answer is necessarily right or wrong. But we need to be asking them, thinking critically about each technological advance and how it will affect us.

A perfect storm of simultaneous technological innovation — the use of graphics cards, the creation of custom hardware, the rise of cloud computing, and the growth in computing capabilities — has already made AI one of the most powerful forces in the world. The widespread use of open-source, Internet-based tools, the explosive growth in data generation, and the ability to rent cloud space or outsource computational resources have brought relative costs down to earth, giving more people access to AI and ML.

So much data is now generated globally each day that only gigantic new infusions of data are likely to make a difference in AI's growth going forward. That implies that only the largest, most technically sophisticated firms with the capability to consume and process such volumes of data will benefit from it in a meaningful way in the future.

Enlightened corporations, governments, and, one day, multilateral institutions will try to govern AI in their particular domains, and it will not be a straightforward process. AI is starting to careen out of control, even though some of the sectors in which it has the most impact, such as financial services, are already being regulated.

It will take a long time for organizations, governments, and NGOs to work through these questions. Many are straightforward questions about technology, but many others are about what kind of societies we want to live in and what type of values we wish to adopt in the future.

If AI forces us to look ourselves in the mirror and tackle such questions with vigor, transparency, and honesty, then its rise will be doing us a great favor. We would like to believe that the future development of AI will proceed in an orderly and wholly positive fashion, but the race to achieve AI supremacy won't happen that way. Instead, it is more akin to elbowing one's way to the front of the line to grab a piece of the pie. Certainly, that is how the leaders in the race (China, the U.S., and their top technology companies chief among them) are already approaching it.


Daniel Wagner is CEO of Country Risk Solutions. Keith Furst is Managing Director of Data Derivatives. They are the co-authors of the new book "AI Supremacy."