Instagram has defended its latest safety features aimed at protecting teenagers from sextortion attempts, following criticism that the measures do not go far enough, the BBC reports.
The platform’s parent company, Meta, introduced tools on Thursday intended to combat criminals who target teens for explicit images, but some experts argue that more robust solutions are needed.
The new features include blocking screenshots and screen recordings of disappearing images and videos. Meta says these additions are part of an ongoing effort to prevent scammers from tricking teens into sending intimate images. The UK’s National Society for the Prevention of Cruelty to Children (NSPCC) described the measures as a “step in the right direction.”
However, Arturo Béjar, a former Meta employee turned whistleblower, told the BBC that Instagram could implement simpler solutions, such as allowing teens to easily flag suspicious accounts pretending to be peers. Béjar pointed out that by the time sextortion is reported, the damage is often already done.
Meta has responded by emphasizing that its new tools give teens clear ways to report inappropriate behavior and harassment. The company noted that the tools were developed based on user feedback and prioritize the reporting of unwanted nude images.
Richard Collard, associate head of child safety online policy at the NSPCC, questioned why similar protections are not being rolled out across Meta’s other platforms, including WhatsApp, where grooming and sextortion also occur frequently. The UK’s communications regulator, Ofcom, has warned that social media companies face potential fines if they fail to keep children safe online.
Sextortion, a form of blackmail in which scammers coerce people into sending explicit material and then demand payment to prevent its public release, has become a growing concern, particularly among teenage boys. According to the UK’s Internet Watch Foundation, 91% of sextortion reports in 2023 involved boys.
The emotional toll of sextortion can be devastating, with victims often facing shame, stress, and isolation. Tragically, some victims have taken their own lives after being targeted, leading to calls from parents for social media companies to take more proactive steps in preventing these crimes.
One such parent, Ros Dowey, whose 16-year-old son Murray died by suicide in 2023 after being targeted on Instagram, has been outspoken in her criticism of Meta. She believes the company is not doing enough to protect young users.
In response to the criticism, Meta’s head of global safety, Antigone Davis, highlighted the company’s ongoing efforts to safeguard teens. She explained that new Instagram campaigns aim to provide both teens and parents with information on spotting sextortion attempts, in addition to the platform’s built-in protections.
Instagram has also introduced measures that will hide teens’ follower lists from potential sextortion accounts and alert them if they are communicating with someone from a different country. However, Béjar expressed concerns that some of these protections could provide a “false sense of security,” as attackers could still photograph screens using separate devices.
Meta has said that its nudity protections are designed to educate teens about the risks without shaming them. The company is also moving under-18 users into stricter Teen Account settings, which require parental supervision for certain changes.
While Meta continues to enhance its safety tools, critics, including Ofcom, argue that it is ultimately the responsibility of tech firms, not parents or children, to ensure online safety.