Instagram Beefs Up Child Protection Measures
Social media giant Meta said on Tuesday it was rolling out a slew of measures to boost the safety of young users on its Instagram platform, making it the latest tech firm to address the issue.
Campaigners have long criticised tech giants for failing to protect teenagers from harmful content, and the popularity of Instagram with young people has placed it firmly in the firing line.
Meta, which also owns Facebook and WhatsApp, said parents and guardians would be able to set time limits on children’s scrolling on Instagram.
Young users would also see nudges encouraging them to explore other subjects if they spend too much time looking at content about a single topic.
“It is crucial for us to develop tools that respect the privacy and autonomy of young people while involving parents in the experience,” said Clotilde Briend of Meta during a media briefing.
Instagram was rocked last year by revelations from whistleblower Frances Haugen that suggested executives were aware the platform could harm the mental health of young users, particularly teenage girls.
Meta has consistently denied the claims but has since faced a series of grillings in the US Congress and suggestions that regulation could be on the way.
Other apps, including video-sharing platform TikTok, have also been criticised over fears young people were finding it hard to tear themselves away from the content.
Last week, TikTok announced young people would get nudges to remind them to take a break from scrolling — similar to an Instagram feature that has already been rolled out.
On Tuesday, Meta also announced new measures for its virtual reality headsets.
Parents and guardians will be able to block apps, view what their child is looking at from another device, and see how long their child has been wearing the headset.