Meta, which has faced ongoing political pressure amid criticism that its apps don't do enough to ensure the safety of children, announced new steps it's taking to protect users from "sextortion" and "other forms of intimate image abuse."
Among the new measures: Instagram is introducing a new nudity-protection feature in direct messages, which blurs images detected as containing nudity and prompts users "to think twice before sending nude images." On Instagram, nudity protection will be turned on by default for teens under 18 globally, and the app will also show a notification to adults encouraging them to turn it on.
According to Meta, the feature is designed to not only protect people from seeing unwanted nudity in their DMs but also to protect them from scammers who may send nude images to trick people into sending their own images in return. "While people overwhelmingly use DMs to share what they love with their friends, family or favorite creators, sextortion scammers may also use private messages to share or ask for intimate images," Meta said in a blog post Thursday announcing the feature.
With the new feature enabled, when Instagram users receive an image containing nudity, it will be automatically blurred under a warning screen (and they can choose whether or not to view it). In that case, Instagram also will display a message encouraging users not to feel pressure to respond, with an option to block the sender and report the chat. Instagram's nudity-protection feature uses on-device AI technology to analyze whether an image sent in a DM on Instagram contains nudity, which means it will work in end-to-end encrypted chats, according to Meta.
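Meta hasn't published details of its on-device model, but the overall flow it describes is straightforward to picture. Below is a minimal, purely illustrative Python sketch of that client-side pipeline: a crude skin-tone heuristic stands in for the real classifier, Pillow's Gaussian blur stands in for the warning-screen blur, and the names `nudity_score`, `screen_incoming_image` and the `BLUR_THRESHOLD` cutoff are all assumptions for this example, not Meta's API. Because the check runs entirely on the recipient's device, the plaintext image never has to leave the encrypted chat, which is why this approach is compatible with end-to-end encryption.

```python
from PIL import Image, ImageFilter

# Assumed confidence cutoff for blurring; Meta has not published its threshold.
BLUR_THRESHOLD = 0.5


def nudity_score(image: Image.Image) -> float:
    """Toy stand-in for an on-device nudity classifier.

    Returns the fraction of pixels falling in a rough skin-tone range.
    A real implementation would run a small vision model locally; this
    heuristic is only here to make the sketch self-contained and runnable.
    """
    rgb = image.convert("RGB")
    pixels = list(rgb.getdata())
    skin = sum(
        1
        for r, g, b in pixels
        if r > 95 and g > 40 and b > 20 and r > g and r > b and abs(r - g) > 15
    )
    return skin / max(len(pixels), 1)


def screen_incoming_image(path: str) -> Image.Image:
    """Return the image to display: heavily blurred if flagged, untouched otherwise."""
    image = Image.open(path)
    if nudity_score(image) >= BLUR_THRESHOLD:
        # The real UI would overlay a warning screen and let the recipient
        # choose whether to reveal the original image.
        return image.filter(ImageFilter.GaussianBlur(radius=40))
    return image


if __name__ == "__main__":
    # "incoming_dm.jpg" is a hypothetical file name for demonstration.
    screen_incoming_image("incoming_dm.jpg").show()
```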
The company shared a mock-up of what the Instagram nudity-protection feature looks like.
According to Meta, the company takes "severe action" when it becomes aware of users engaging in sextortion: It removes their accounts, takes steps to prevent them from creating new ones and, if appropriate, reports them to the National Center for Missing and Exploited Children and law enforcement agencies. In addition, the company said, it is developing technology to help identify accounts that may be engaging in sextortion scams based on "a range of signals that could indicate sextortion behavior." Going forward, Instagram won't show the "Message" button on a teen's profile to suspected sextortion accounts (even if they're already connected), and it is testing ways to make it harder for such accounts to find teen accounts in search results. Meta also is testing new pop-up messages for users who may have interacted with an account it has removed for sextortion.
"Meta's proposed device-side safety measures within its encrypted environment is encouraging," John Shehan, SVP at the National Center for Missing and Exploited Children, said in a statement provided by Meta. "We are hopeful these new measures will increase reporting by minors and curb the circulation of online child exploitation."
The new features come after an extraordinary moment at a Senate hearing in January, when Meta CEO Mark Zuckerberg apologized to families of kids who have been victims of online abuse on the social giant's platforms. Zuckerberg directly addressed parents attending the hearing who said their children had suffered harassment and exploitation on Meta's apps. "I'm sorry for everything that you all have gone through," Zuckerberg said. "It's terrible. No one should have to go through the things that your families have suffered." He said Meta is investing more than $20 billion in safety and security systems and personnel across its family of social-media apps.
Meta said the latest steps to help protect younger users from unwanted or potentially harmful contact build on its previous work. For example, by default, teens using its apps can't receive DMs from anyone they're not already connected to, and its apps show warning notices to teens who are already in contact with potential scam accounts.
In November 2023, Meta announced it was among the founding members of Lantern, a program coordinated by the Tech Coalition that enables technology companies to share "signals about accounts and behaviors that violate their child safety policies." According to Meta, "This industry cooperation is critical, because predators don't limit themselves to just one platform — and the same is true of sextortion scammers."