Daniel Araya: Tim, you are considered by many to be the guru of Silicon Valley. What shapes your interest in technology? Where do you think technology is taking us?

Tim O'Reilly: I don't think of myself as a guru, but rather just as someone who is good at pattern recognition. I find interesting people, and draw connections between the things that they are involved in. When I see that the prevailing "map" of how things fit together leaves certain things out, I try to redraw the map. For example, in 1997-1998, I realized that the rhetoric of the free software movement almost completely ignored the fact that the internet was built with software that was also given away freely – in fact, with a license that was even freer than the one used by the Free Software Foundation. But it wasn't really part of the narrative, which was shaped by a set of political ideals. So I tried to redraw the map, and in the course of that redrawing, helped to get the new "open source" meme off the ground.

In 2004, I was trying to explain the move to the Internet as a platform, and all that entailed. When Dale Dougherty coined the term Web 2.0, I was able to attach to it my ideas about the importance of data to future lock-in, about cloud-based software development, and about the way that the internet rather than the PC was becoming the platform for the next generation of applications. In 2009, I spent a lot of time trying to tell a story about how government had to start thinking like a platform provider.

Right now, I'm focused on the on-demand economy, and the economic, business and social issues that it raises. I'm working on a new event called The Next:Economy Summit to pull all those ideas together.

I love helping people see the future more clearly. But I really do this by helping them see the present. I'm the one who popularized a line William Gibson once said in an NPR interview: "The future is here. It's just not evenly distributed yet." I don't try to predict the future so much as point people to early signs of where it is already unfolding.

You’ve recently made the case that “algorithmic regulation” is preferable to government incompetence. Could you explain your thinking on this? How do you see sensor technologies and software reinforcing government oversight?

There's been a revolution in how software is developed, and it needs to spill over into other areas of society. We now build applications in a rapid learning cycle – many people call it "build-measure-learn". This means creating a "minimum viable product," testing people's reactions to it, and then quickly evolving it into something better. Software is continually being adapted to new feedback – a major website like amazon.com or Google may be running hundreds or even thousands of experiments each day.
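To make that loop concrete, here is a minimal sketch of the kind of randomized experiment such a site might run; the variants, users, and conversion rates below are invented for illustration, not drawn from any real system. Build two versions of a feature, measure which one converts better, and promote the winner into the next cycle:

```python
import random

def run_experiment(variant_a, variant_b, users, trials=10_000):
    """One build-measure-learn cycle: randomly expose users to two
    variants, measure a conversion metric, and report the winner."""
    conversions = {"A": 0, "B": 0}
    exposures = {"A": 0, "B": 0}
    for _ in range(trials):
        arm = random.choice(["A", "B"])            # randomized assignment
        variant = variant_a if arm == "A" else variant_b
        exposures[arm] += 1
        if variant(random.choice(users)):          # did this user convert?
            conversions[arm] += 1
    rates = {arm: conversions[arm] / exposures[arm] for arm in ("A", "B")}
    return max(rates, key=rates.get), rates        # "learn": keep the winner

# Hypothetical user base and variants; each variant returns True on conversion.
users = [{"returning": random.random() < 0.4} for _ in range(1_000)]

def variant_a(user):
    return random.random() < (0.08 if user["returning"] else 0.05)

def variant_b(user):
    return random.random() < (0.09 if user["returning"] else 0.06)

winner, rates = run_experiment(variant_a, variant_b, users)
print(f"promote variant {winner}; measured rates: {rates}")
```

A production system would add significance testing and guardrail metrics, but the loop itself – build, measure, learn, repeat – is exactly this shape.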

By contrast, government policy (and the IT systems that support and implement that policy) is developed using a methodology in which someone's idea of the right policy is cast in stone, with no opportunity to learn and adapt as it is rolled out. The features imagined by the creators of a new government policy are often enshrined in laws or regulations that make them very difficult to change.

This is a big part of why people dislike and distrust government. If you can't adapt and learn from what you do, then a great many of the programs you put together – however well intentioned – will just not work.

But there's a further level of adaptation, beyond just learning from your users as you roll out a program: dynamically measuring the results. That's what I called "algorithmic regulation" (though I've since come to regret the term, because it makes an easy target for those who are hostile to technology). It's a fairly simple idea, really. When governments issue regulations today, they are rule-based. You write down and publish some rules, and then enforce them. But what if the rules don't actually produce the results you intended? You keep enforcing them. You keep getting the wrong results, and things get worse.

In the technology world, there are lots of regulatory systems – from the fuel-injection system that regulates the fuel-air mixture in an internal combustion engine, to an airline autopilot, to a credit-card fraud detection system, to anti-spam software, to Google search quality – all of which take a very different approach. They measure outcomes and adjust to achieve those outcomes.
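The shared pattern behind all of these systems is a closed feedback loop: measure an outcome, compare it to a target, and adjust the inputs, rather than enforcing a fixed rule and hoping. A minimal sketch of that pattern, with an invented plant model and gain (14.7:1 is the textbook stoichiometric air-fuel ratio for gasoline):

```python
def regulate(measure, actuate, target, gain=0.1, steps=50):
    """Generic outcome-based regulator: a simple proportional feedback loop.
    `measure` reads the current outcome; `actuate` nudges the control input."""
    for _ in range(steps):
        error = target - measure()   # how far from the intended outcome?
        actuate(gain * error)        # adjust in proportion to the error

# Toy plant: an air-fuel mixture whose ratio responds directly to adjustments.
state = {"ratio": 12.0}              # current air-fuel ratio

def measure():
    return state["ratio"]

def actuate(delta):
    state["ratio"] += delta          # in a real engine: change injector timing

regulate(measure, actuate, target=14.7)
print(round(state["ratio"], 2))      # converges toward the 14.7 target
```

Notice that the regulator never consults a rulebook; it only watches the measured outcome and keeps adjusting until the outcome matches the intent. That is the inversion being described here: regulate by results, not by fixed prescriptions.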

The closest thing we have to this in government is the role of central banks in attempting to manage inflation and unemployment. They meet periodically and adjust the rules (the money supply, interest rates, and the few other knobs and levers under their control) in an attempt to achieve the desired result.

But really, all government regulation should work as a learning system. What were the legislators and regulators trying to achieve? Did the rules achieve it? And if they didn't, how should we adjust the rules until they do?

In your view, Big Data will be key to reducing systemic malfeasance within the systems and networks that now sustain information societies. This makes a lot of sense, but it also raises the question: what social systems are necessary to manage algorithmic overreach? How do we limit the temptation for exploitation in such a vast ‘panopticon’ of digital management and control?

That's a little bit like asking, "What social systems are necessary to manage bureaucratic overreach?" The same foolishness that gave us paper-based rules and regulations that don't achieve their intended goals can translate into algorithmic regulation; cupidity can make even smart people do stupid things. But we do have pretty good evidence of some fairly complex systems that use these techniques extensively while remaining reasonably true to their intended purpose.

Google may tilt the playing field a bit toward its own profit, but mostly it does serve its customers. As long as there is competition, markets do a reasonable job of making sure that companies serve their users. But when monopolies or oligopolies arise, watch out! It's precisely for these kinds of market failure that we need government intervention.

One of the big problems today, however, is that government can't keep up. Just look at financial regulation around the 2008 crash. Financiers were building complex algorithms that even they didn't understand. Meanwhile, government regulators were using ponderous 19th- and 20th-century regulatory mechanisms to oversee this activity. It was a classic example of "bringing a knife to a gunfight."

Financial regulation today needs to have many of the same qualities as anti-spam software, credit card fraud detection, or Google search quality. And instead, it has the same qualities as so many other government regulatory processes: out of date rules, spotty inspection, and weak enforcement.

Beyond the simple delivery of public services, could these tools and resources empower a new generation of citizen agency? I mean, what is the potential of software in government to provide platforms for enhancing democracy?

Possibly. But you have to understand that every new technology is hailed in ways that later seem rather quaint. Moving pictures and the telephone were going to bring world peace – after all, how could you go to war with people who were now brought so much closer together? Every tool simply amplifies our own nature – both our greatness and our failings as a species. So yes, these things could empower a new generation of citizen agency and enhance democracy. But will they? That's entirely up to us.

I imagine most Americans would be happy enough with a high-functioning government, provided algorithmic regulation was cost-effective. Let us assume that computer algorithms begin to replace stratified layers of bureaucracy. Aren’t there pretty significant dangers in opening the government to much higher levels of cyber terrorism?

My friend and early mentor, science-fiction writer Frank Herbert, once said, wryly, "Give us this day our daily devil." Cyber terrorism is today's devil. And yes, it's a real problem. But frankly, our society is already so dependent on computers and algorithms that unless we get far more serious about security in the systems we use, we're pretty much screwed. Since we live in a technological world, let's get good at it! I'd bet that Goldman Sachs and Google are both better at this than the government, because they live and breathe the technology. If government were as good at this stuff as the best private firms, we'd all be much better off.

How do you imagine the functioning of government over the long-term? Is algorithmic regulation simply part and parcel of a larger shift to labor automation across both the public and private sectors?

I don't like the term "automation." It emphasizes the wrong things about the relationship between humans and their tools. The best tools don't make us more automatic. They augment us, making us stronger and more capable.

We've made a massive mistake with technology over the past three or four decades, using it to automate jobs and replace workers, rather than using it to augment workers and give them amazing new capabilities to do things that couldn't be done before.

Of course, we have augmented some professions. One of my favorite examples was my laser eye surgery a decade ago, which took away forty-plus years of extreme nearsightedness. I remember when the surgeon gave me my instructions: "Look at the light." "What will happen if I look away?" I asked. "Oh, the laser will stop," she replied. "But the operation will just take longer." This is a surgery that no human surgeon could do unassisted.

So, how do I imagine the functioning of government over the long term? It has to get better at implementing the programs it imagines, testing the outcomes, and refining each program until it achieves its objectives. This is the kind of thing we do at Code for America – helping governments apply lessons from technology so they can build programs that actually work. And once these systems are up and running, they need to be managed with real-time feedback loops, so that as the world changes, so do they.

"Algorithmic regulation" takes nothing away from human governance. In an ideal world it will make our regulatory systems more effective and responsive. But in order to get to that ideal world, we need to have government officials who know how technology works, and who can take the lessons of the modern world into government.

Daniel Araya is a researcher and advisor to government with a special interest in education, technological innovation, and public policy. His newest books include Augmented Intelligence (2016) and Smart Cities as Democratic Ecologies (2015). He has a doctorate from the University of Illinois at Urbana-Champaign and is an alumnus of Singularity University’s graduate program in Silicon Valley.

He can be reached at www.danielaraya.com and @danielarayaXY.

This interview has been edited for clarity and length.


