Pinterest has updated its privacy policy to reflect its use of platform user data and images to train AI tools.

A new clause, published this week on the company's website, outlines that Pinterest will use its patrons' "information to train, develop and improve our technology such as our machine learning models, regardless of when Pins were posted." In other words, it seems that any piece of content, published at any point in the social media site's long history — it's been around since 2010 — is subject to being fed into an AI model.

In the update, Pinterest claims its goal in training AI is to "improve the products and services of our family of companies and offer new features." Pinterest has promoted tools like a feature that lets users search by body type and its AI-powered ad suite, which according to Pinterest's most recent earnings report has boosted ad spending on the platform. The company is also building a text-to-image "foundational" AI model, dubbed Pinterest Canvas, which it says is designed for "enhancing existing images and products on the platform."

The platform has stressed that there is an opt-out button for the AI training, and says it doesn't train its models on data from minor users.

Pinterest is the latest tech company to position itself to swallow user data for AI training, following in the footsteps of Meta (which owns Facebook and Instagram), Reddit, and Google, among others. The immediate applications vary — Meta and Google, for instance, are funneling data into systems like massive large language models, while Reddit has sought to monetize its data troves by selling them to AI makers — but communicating on the web increasingly means your data is being used to train AI tools.

Soon after we reached out to Pinterest with questions about the update, a spokesperson contacted us and insisted that the change wasn't newsworthy because it simply codifies practices Pinterest was already engaged in. Later, the company provided us with an emailed statement.

"Nothing has changed about our use of user data to train Pinterest Canvas, our GenAI model," read the statement. "Users can easily opt out of this use of their data by adjusting their profile settings."

Pinterest was already training its AI tools with user data, as the company touches on in this Medium post about Canvas, but the practice is now codified in the platform's privacy policy.

Pinterest's update comes amid brewing controversy over the impact that generative AI-created content has already had on its platform.

As Futurism first reported, top Pinterest search results across topics and categories — food, fashion and beauty, home decor, art, architecture, and more — are overrun with undisclosed AI slop. This content frequently links back to similarly synthetic websites, helmed by fake authors and composed of AI-generated text and imagery, created to cash in on revenue from display ads.

The spammers behind the AI slop onslaught have discussed their tactics on YouTube and other platforms, as we reported, explaining that the goal is to publish cheap AI-spun pins en masse to Pinterest for clicks. Actual Pinterest users, meanwhile, have expressed their frustration, lamenting the need to sift through piles of AI slop to find real, human-made content, and bloggers and other creators are sounding alarm bells over the danger that unchecked viral AI garbage-for-profit poses to their businesses.

In response to our reporting, Pinterest insisted that AI-generated material comprises only a small amount of its content library overall, and said it was working on tools to label content as AI-generated. The company reiterated the latter claim in a new section about AI in its Help Center, which appears to have been published in tandem with the new policy update.

"We're thoughtfully exploring Generative AI (or GenAI) technology that drives innovation and creativity," reads the webpage, which also discusses the use of AI for content moderation and other more traditional social media use cases.

In a separate section, it adds that the "labeling of AI-generated or modified content helps to provide relevant context about the content people see on Pinterest."

The labels, it says, will only be visible when a user goes to the trouble of tapping on a pin for a closer look.

"We're working on ways to expand capabilities," the page continues, "to better identify GenAI content in the future through additional technologies."

Following the privacy policy update, some Pinterest users reflected on the changes in the r/Pinterest subreddit.

"And here I was hoping someday there'd be something done to slow the waves of AI slop on the site," reads one comment. "The fact this option is default on is really — eugh."

"I saw that it was automatic opt in," added another commenter. "I hate that so many platforms are doing that! It's so underhanded. It should've been the user's choice to check that, not theirs."

More on Pinterest and AI: Pinterest Is Being Strangled by AI Slop
