So, Yeah: Facebook Is Mining Your Instagram Photos, Too
You probably shouldn't be surprised. But the reason why is kind of crazy.
Facebook is so ubiquitous it’s sometimes easy to forget that its reach spans much further than your likes, comments, and groups on its main platform.
Reminder: the tech giant owns Instagram and WhatsApp, too. And if you’re a person who maybe doesn’t like a company knowing every single thing about you, then, surprise! That’s going to keep biting you in the ass for a while.
At Facebook’s F8 developer conference, CTO Mike Schroepfer cheerfully revealed that the goliath media company has been scanning billions of photos posted to Instagram. The purpose: to teach artificial intelligence to identify “offensive content, spam, hate speech, fake accounts, fake news, clickbait, and more,” automating a bit of what human content moderators already do.
In the aftermath of the Cambridge Analytica data scandal, and a presidential election tainted by misinformation, some people are flipping out about this perceived violation of trust. UK tabloid the Daily Mail warned users “you might want to think twice before you upload your next selfie to Instagram.” ABC News in Australia added that “even your selfies and brunch photos aren’t safe from the long arm of Facebook.”
And who could blame them? It’s natural not to trust Facebook at this point. Not even the announcement of a redundant dating platform can distract us.
But, honestly, the news shouldn’t be all that surprising.
Under Facebook’s ownership, Instagram’s publicly available data could be used for pretty much anything. Instagram’s terms of service stipulate that while the service “does not claim ownership of any Content that you post on or through the Service,” it does reserve a “royalty-free, transferable, […] worldwide license to use the Content that you post on or through the Service.” That’s pretty damn broad language, and it gives the company plenty of leeway to do whatever it wants with your photos, at least internally.
What does leave a bad taste in the mouth, however, is the lack of transparency. Were Instagram users informed their images were being harvested to train an AI? Do they really expect us to read the 4,890-word terms of service before posting any pictures of our Sunday brunch?
It makes sense that Facebook is turning to AI to try to process some of the less-than-desirable content that shows up across its platforms. Over 2 billion users means a staggering number of posts and pictures — and unfortunately, hate speech, fake news, and in rare instances, terrorist threats. Mo’ users, mo’ problems. Right, Zuck?
But until that AI tool is ready, humans are going to have to do the tedious job. As Zuckerberg testified to Congress, Facebook plans to double its “Safety and Security” staff to 20,000. That’s a pitifully low number considering Facebook’s Sisyphean task of weeding through all that data. But taking advantage of the billions of public Instagram images — our well-groomed garden of #nofilter grams — to do so, with no warning? Well, let’s just say users may be more likely to remember that Facebook, with all its privacy woes, still owns Instagram.