For years, Hany Farid has been warning Congress, the media, and anyone else who would listen that society is losing the battle against disinformation online.

With the rise of sophisticated, hyper-realistic deepfakes and other manipulated media, and with the 2020 presidential election approaching, his warnings seem more prescient than ever. Farid, a professor in the University of California, Berkeley's electrical engineering and computer sciences department, is among the foremost experts in digital forensics — the practice of developing technology that can identify doctored media.

On Thursday, Farid's going to give a keynote address on digital forensics at the Spark + AI Summit. Futurism caught up with him in advance to learn what he sees as the greatest digital threats and how to combat them. Our conversation, lightly condensed and edited for clarity, is below.

Futurism: Dr. Farid, I'm so glad we could find time to talk. For years, you've been one of the leading experts sounding the alarm over digital disinformation, and I'd love to hear what you've been up to lately. In this seemingly endless battle against deepfakes and other misleading media, what's caught your eye most recently?

Hany Farid: Some on the right think there's an overarching bias against conservatives in tech. I don't think there's any evidence for that. But when I testified six months ago, the conversation was primarily about the 2020 election. That's still a concern, but right now it's not even number one or number two; those are COVID and Black Lives Matter. And number three is voter suppression.

It's been a bizarre few months. At the turn of the year, I thought "this is going to be the year about preserving our elections." But this year got derailed. It's really been about COVID misinformation, which can be deadly. There are people who believe that you can kill the virus by blowing a hairdryer down your throat, by drinking silver, by drinking bleach. You can see a direct connection between the nonsense online and people's actions.

It's not just scammers and trolls and Russian influencers. It's literally the President of the United States with this incredible megaphone. And then when Twitter puts a warning label on one tweet, he signs an executive order.

So we've been focusing primarily on three things: COVID-19 misinformation and how it's spreading; the election, which we continue to be concerned about; and, most recently, the Black Lives Matter protests: how technology is being weaponized against people of color, how it's being used to propagate the narrative that Black Lives Matter is violent, and how it's being used to stifle the voices of people of color online.

People talk about freedom of speech, but what happens when your freedom of speech infringes on 100 other voices? They don't want to consider that.

The one thing I will say was almost universal: the dislike of Mark Zuckerberg and Facebook. Everyone agrees that he's a wanker. It's amazing how that changed! He was going to run for president, remember?

Futurism: Oh yeah, his Iowa tour. We really avoided the worst possible timeline there.

Farid: And now he's persona non grata. Now everyone agrees he's a greedy, power-hungry creep.

Anyway, I continue to think that deepfakes are a real threat. But the reality is that with COVID, we're not dealing with deepfakes. We're dealing with people tweeting, with the President tweeting. Good old-fashioned fake news; there's nothing sophisticated about it.

Deepfakes are not a fundamentally new concept. They're part of an evolving landscape of misinformation. I think the way deepfakes are going to manifest themselves as a threat is not so much in the day-to-day, low-level fake news, the conspiracies, the lying, cheating, and stealing. It's going to be the one video that goes viral two days before an election. Or it's going to be nonconsensual porn, which is one of the main ways women are victimized by deepfakes.

The irony here, and I don't know if it's good news or bad news, is you don't really need deepfakes. You have this bazooka, but you can use a spitball to do the same thing. So why not just use the spitball?

I think the stakes have never been higher for the country and the world. The Trump campaign, and I'm not being partisan about it, plays dirty. Their campaign manager named their re-election campaign the "Death Star." We know what the game is going to be like.

Futurism: So I want to circle back to some of the things you brought up, but before we do, I'd like to focus on the technological side of things: more on deepfakes and other technologies like that, even if they're not the greatest concern right now. Almost exactly a year ago, you warned that the scientists and engineers working to detect deepfakes and similar disinformation were hopelessly "outgunned" by the people creating them. Is that still true?

Farid: Yeah, absolutely. We're outgunned in a couple of different ways. First is the research community. The computer vision, computer graphics, and machine learning communities are huge, and the forensics community is tiny. There are just fewer of us; that's all it is. And that's not to say that all the people working in computer graphics or computer vision are my nemeses. They're just developing the technology that allows this to happen.

Part two of being outgunned is the tech companies, because they control the flow of information. Even if I develop really, really good deepfake detection, I need Mark Zuckerberg to stop saying, "Well, I don't want to be the arbiter of truth."

There's this disconnect: on the research side, Facebook says, "We're going to spend all this money to detect deepfakes." Then on the policy side, they say, "We don't want to be the arbiters of truth." So which is it? Why are you detecting these things if you're not interested in stopping them?

Futurism: I'm glad you brought that up because I wanted to ask you about this ongoing debate over whether scientists should be responsible for how their work is used. To use the classic example of a deepfake of President Trump announcing a nuclear strike: should the engineer building the algorithm that makes that possible refuse to publish their work over ethical considerations? Or do they need to focus on navigating the publish-or-perish system and leave it up to someone else to regulate how it's used?

Farid: First of all, this is not limited to this field. In every scientific discipline, you always have to think about what you're doing. Physicists in particular have been thinking about this for a long time, given their history in developing atomic weapons.

Computer science is a little bit of a younger field. And it tends to suffer from a techno-utopian worldview — this idea that technology can solve all our problems, which we're seeing lately is not the case.

In the last week or two, Microsoft, IBM, and a couple of other companies said they will no longer license facial recognition software to law enforcement because of all the ongoing issues. Should people developing facial recognition think about bias issues and misuse issues? Absolutely! But for a long time they didn't give it the time of day.

I think the days of the computer scientist going "Hey our hands are clean, you can’t blame us for how people are using our technology" are over. I'm not saying don't innovate, but you have to have the conversation. You have to think about the ethics of developing technology. These can't be afterthoughts. Part of the problem is technologists aren't well-trained to think about these issues.

This is really where you see the problem of diversity in the tech sector. If it's a bunch of White dudes, there is no diversity of ideas. With so few women, so few Black people and people of color in tech, you're not getting the richness of debate that you would get with a diversity of views. For those who say diversity doesn't matter, I would say that part of the reason we have racial bias in our algorithms is because of this. When those training datasets are put together, there are no Black people in the room, and the algorithm has never seen a Black person before.

The answer is yes — we should take responsibility. Not as an afterthought, but as a forethought, you have to think "Hey this is not cool, we're not going to build it." Or "We're going to build safeguards."

The way it works now is "Let's build a piece of technology, let's put it on GitHub, and let's see what happens." But when you put software online, some batshit crazy things happen. And when you tell engineers that they need to think about how their work is used, they disagree. I've had people scream in my face about it. But I think they'll be proven wrong.

Futurism: I promise I only have one more question on the bazooka rather than the spitball, then we can stop talking about years-old tech. When you gave that warning, it was right around the time that websites like "thispersondoesnotexist.com," which showcase hyper-realistic faces created by generative adversarial networks, started to pop up. Does tech like this have you concerned? Or is it the same old problem with some new gloss?

Farid: It's part of the continuum, but here's the real threat of thispersondoesnotexist: after it launched, they went and fixed all the errors, removed all the artifacts. So now I can carpet-bomb the internet with fake profiles. The scale at which you can operate is scary. My favorite example was the teen who created a fake Congressional candidate using a thispersondoesnotexist photo, and the account got verified by Twitter and everything.

But I fell victim to this thinking myself. I used to think, "Okay, you can create a version of a person who doesn't exist. But it's not a face-swap video, it's not a recording." So I thought, "Well, it's not the biggest problem." But it lends so much credibility to these fake accounts, because fake accounts used to not have pictures. We're absolutely seeing the creation of fake accounts on Twitter, on LinkedIn. And with that added legitimacy, it's a real concern as far as the good old-fashioned problem of fake news goes.

Futurism: Well, I suppose this is the million-dollar question, but at a high level, how do we fix it? Is it up to politicians? Tech companies? Engineers? Who should assume responsibility?

Farid: It is the question, and the answer is "yes" to all of the above. It's a complex problem. And it's also important for us in the public to stop being so gullible, so easy to manipulate, so quick to assume the worst of the people we disagree with.

But it starts at the top. The law needs reform. Right now there's no liability for what's published on these platforms; no other industry has that freedom. We need to hold companies liable, and then there will be more accountability, because they don't want to be sued.

And then technologists need to ask themselves, "Should we really be making this? Should we make it so easy for anyone to download and use?"

And number three is the public. You've got us, the knuckleheads in the trenches, liking and retweeting and sharing, amplifying misinformation again and again. We've got to stop being so goddamn stupid, frankly. Or maybe gullible is a better word.

Futurism: Well, it's interesting that you bring up reform after talking about Trump's executive order earlier. It sounds like a similar argument about Section 230, sort of like that ClickHole headline: "The Worst Person You Know Just Made A Great Point." I assume there are many differences between what you're proposing and what Trump wants, and different motivations behind them, but could you go into that more?

Farid: It's really unfortunate that he waded in when he did. We were starting to get some really serious discussion going. We've been talking about this for years with thoughtful people from across the spectrum. He stepped in and really politicized it, so any conversation about 230 now is reactionary, back to where it started.

There are two parts of Section 230. One part, (c)(2), essentially says: if you make a good-faith effort to take down content, we won't hold you liable for the content you miss. It's what lets a platform, for instance, remove a post from the President that incites violence, and it protects them from the government stepping in. That should remain.

What we need to change is (c)(1), which is where the duty of care comes in. Right now it says, in effect, "do what you want, we don't care." It absolves platforms of responsibility for what their users post. You can't just build a car, have it explode, and say, "Cars are really complicated. We build a lot of cars; we're not responsible."

So what we are saying is that there has to be a reasonable duty of care, and that will realign the tech industry with every other industry in the world. If you have products that you are willingly allowing to be used for harm, that needs to change. That's not what Trump's executive order was about. Trump's executive order was "I don't like you calling me out for lying." And that ability to call him out is exactly what we need to protect.

Futurism: I'd like to circle back to what you brought up toward the beginning about the new, higher-priority issues. One of the things that's most interesting to me about digital disinformation is how it feeds into and amplifies existing worldviews. Whether it's because of social media or news feed algorithms promoting bad info or deliberate disinformation campaigns, things like wearing masks during the pandemic or support for the Black Lives Matter protests have become contentious political issues within our culture war. Some are convinced that both of them are hoaxes. It feels almost reductive to ask such a simple question about this, but how did this happen? What do we do about it, and how do we, as a society, reckon with it?

Farid: It turns out there's a really simple answer. The answer is the underlying business model of social media. They are, at the end of the day, in the attention-grabbing business, so they can extract your personal data and deliver ads. It turns out, and I don't think this should be too surprising, that the most outrageous, sensational, conspiratorial content is the most attractive.

The problem is the algorithm. So what does Facebook do? They A/B test. I happen to be on the left side of the political spectrum, so if my newsfeed is a whole bunch about Trump, I'm out. I'm done. So they give me a whole bunch of things that align with my worldview. The fact is that if engagement is what you're optimizing for, the conspiratorial and the divisive work.

So how do we change that? If you think about when Facebook and Twitter came out, 10 to 15 years ago, we weren't really comfortable with commerce online. I'm a technologist, and even when Amazon wanted my credit card I said, "I don't want to put that online." So Facebook needed to find a way to monetize in spite of that.

But now the landscape has shifted. We're more comfortable with online commerce, so I think maybe there's a better business model out there. But then you have Facebook and Google, these two gorillas in the room sucking all the oxygen out of the debate.

The irony here is that Google will say "government regulation is bad for business" but the reason Google is here is because the Department of Justice came in, slapped Microsoft for forcing Internet Explorer down everyone's throat, and left oxygen for Google.

Futurism: So you're giving this keynote speech about digital image forensics. Before I let you go, is there anything else you want to add — anything you're planning on addressing — that I might not have known to ask about?

Farid: I wrote the keynote and even recorded it before the Black Lives Matter movement really started gaining steam and before we understood the breadth and the depth of this.

I have been really troubled by how this has impacted the Black Lives Matter movement. And there are real civil rights issues here. There are many, many concerns here about how technology is being weaponized against Black people. When trillion-dollar companies have their knee on the neck of Black people online, we need to hold them accountable.

Reddit said "We stand with Black lives," but they're profiting off of the KKK, the racists, the gay-haters. You can't give a platform to this vitriol and say "Black lives matter." You can't have it both ways.

And we saw this with #MeToo. To me, this feels even a little bit bigger, because of the timing with COVID. But you saw this back then too. A company will say "We stand with women," but when you look at the board, the C-suite, the record of sexual harassment within the company, it falls apart.

Futurism: Yeah, there are a lot of upsettingly hollow statements out there. But what about companies like Amazon, which said it would ban police from using its facial recognition for a year? That's tangible, but then it turned out that the ban was more of a recommendation than a strict prohibition. Is that any improvement?

Farid: I'm pretty cynical, having been in this space for a while. I don't think it was empty. They did something, right? Now, what happens in a year? It depends on what they do in the interim. Are they going to refine the technology and address the issues? Or is it just "get off our backs, everyone's going to forget about this"? If that's the case, then it's empty.

And then there's IBM, which said it has stopped providing facial recognition altogether — it wasn't just a moratorium. I think these gestures have the potential to be empty, but we need to hold these companies accountable a year from now and see what they do.

More on Dr. Farid: DARPA Spent $68 Million on Technology to Spot Deepfakes

