Meta’s ‘cloud processing’ feature on Facebook may allow its AI to scan and analyse photos from your phone’s camera roll, even ones that have never been posted. Here’s what users should know.

Meta is once again drawing scrutiny over its approach to user data, this time with a feature that could give its AI systems access to images stored privately on your smartphone. The new opt-in setting, recently spotted by Facebook users, is part of a feature called “cloud processing” and is raising serious privacy concerns.
As reported by TechCrunch, the prompt appears when users attempt to upload a Story, and it is presented as a helpful tool. Facebook describes it as a way to regularly scan your phone’s camera roll and sync it to Meta’s cloud servers. In return, users would receive AI-generated suggestions such as event summaries, creative filters, themed collages, and memory recaps.
While the feature might sound convenient, the deeper concern lies in what Meta is allowed to do with the content. By enabling the setting, users give Meta permission to analyse the data in those images, photos that may never have been shared publicly. That analysis can include facial recognition, object detection, and even the reading of embedded metadata such as timestamps and location details, all in the service of refining its Meta AI models.
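To make concrete what “reading metadata” means here, the sketch below is a minimal Python example, using the Pillow imaging library, that prints the EXIF timestamp and GPS details embedded in a typical photo. The filename is a placeholder, and this illustrates what any software with camera-roll access can read; it is not a description of Meta’s actual pipeline.

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

def read_photo_metadata(path: str) -> None:
    """Print the EXIF timestamp and GPS data embedded in a photo."""
    exif = Image.open(path).getexif()

    # Tag 0x0132 (DateTime) holds the capture/last-modified timestamp.
    print("Timestamp:", exif.get(0x0132))

    # Tag 0x8825 points to the GPS IFD, which stores latitude,
    # longitude, altitude, and related location fields.
    for tag_id, value in exif.get_ifd(0x8825).items():
        print(GPSTAGS.get(tag_id, tag_id), "=", value)

read_photo_metadata("IMG_0001.jpg")  # placeholder filename
```

On a phone with location services enabled, the GPS fields alone are often enough to reconstruct where and when a picture was taken.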
Meta claims the feature is entirely optional and that users can disable it at any point. The company has also stated that it will delete cloud-processed images within 30 days of the feature being turned off. However, it has said little about how long data is retained while the feature remains on, or whether those images could later be used to train its AI, and that opacity has triggered unease among privacy advocates.
This concern isn’t unfounded. Meta previously admitted to using public posts by adults on Facebook and Instagram, going back as far as 2007, to train its generative AI models. But what qualifies as “public” remains vague, and the company has never clearly defined the age criteria it uses to separate adults from younger users in its training datasets.
Adding to the uncertainty is Meta’s updated AI policy, which took effect on June 23, 2024. These terms don’t explicitly rule out the use of cloud-uploaded, unpublished content for AI training. When questioned by The Verge, Meta said it is not currently using these private photos to train its models, but declined to comment on future plans or clarify what rights the company might hold over those images.
The framing of the feature as a creative tool for enhancing memories has drawn criticism for downplaying its privacy implications. While users technically opt in, many may not fully realize the scope of access they are granting to Meta’s AI systems.
As tech companies push deeper into AI, the line between helpful innovation and invasive data collection is becoming increasingly blurred. For users concerned about their digital privacy, this episode is a timely reminder to review app settings and think twice before accepting new features.
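One concrete precaution, for readers comfortable with a little scripting, is stripping metadata from photos before they are uploaded anywhere. The following Python sketch, again using the Pillow library with placeholder filenames, re-saves a photo’s pixel data without its EXIF block:

```python
from PIL import Image

def strip_metadata(src: str, dst: str) -> None:
    """Save a copy of a photo without its EXIF metadata."""
    img = Image.open(src)

    # Copying only the pixel data into a fresh image leaves the
    # EXIF payload (timestamps, GPS coordinates, device info) behind.
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst)

strip_metadata("IMG_0001.jpg", "IMG_0001_clean.jpg")  # placeholder filenames
```

It is a small step, and it does nothing about facial recognition or object detection, but it limits what any cloud service can infer from a photo the moment it is uploaded.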