Meta is revisiting facial recognition technology, three years after it discontinued the software amid privacy and regulatory backlash. The social media giant announced on Tuesday that it is testing the technology as part of a new effort to tackle "celeb bait" scams on Facebook.
The trial involves enrolling about 50,000 public figures, whose profile photos will be automatically compared with images found in suspected scam advertisements. If a match is found and the ads are determined to be fraudulent, Meta will block them. Celebrities involved in the trial will be notified and given the option to opt out, Meta said.
This move comes as Meta attempts to balance addressing rising scam concerns with avoiding further controversy over data privacy. According to Monika Bickert, Meta's vice president of content policy, the company aims to protect public figures whose images are used in scam ads.
"The idea here is to offer as much protection as possible," Bickert explained, adding that celebrities could opt out if they chose to.
The trial is set to launch globally in December, excluding regions where Meta lacks regulatory clearance, such as the European Union, Britain, South Korea, and the U.S. states of Texas and Illinois.
In 2021, Meta shut down its facial recognition system and deleted data associated with one billion users, citing "growing societal concerns." In August this year, the company was ordered to pay $1.4 billion to Texas to settle a lawsuit over the alleged illegal collection of biometric data. The current test marks an attempt to use similar technology to combat celebrity-related scams while remaining sensitive to privacy issues.
Meta continues to face lawsuits accusing it of not doing enough to prevent these "celeb bait" scams, where images of famous individuals – often generated by AI – are used to lure users into fraudulent investment schemes.