Published: 24th February 2021
Facebook intensifies fight against child abuse content; tightens norms, adds new tools
Advocating its stringent zero-tolerance stance on child sexual exploitation content on its platform, Facebook said using its apps to harm children is "abhorrent and unacceptable"
Facebook is intensifying efforts to crack down on child abuse content on its platform: the social media giant has tightened norms, improved detection capabilities, and added new tools to prevent the sharing of content that victimises children.
"We're announcing new tools we're testing to keep people from sharing content that victimises children and recent improvements we've made to our detection and reporting tools," Antigone Davis, Global Head of Safety at Facebook, said in a blog post.
Facebook said it is developing solutions, including new tools and policies to curb sharing of such content on its platform.
"We've started by testing two new tools: one aimed at the potentially malicious searching for this content and another aimed at the non-malicious sharing of this content," Davis explained.
The first is a pop-up shown to people who search on its apps for terms associated with child exploitation.
The pop-up offers ways to seek help from offender diversion organisations and warns about the consequences of viewing illegal content.
"The second is a safety alert that informs people who have shared viral, meme child exploitative content about the harm it can cause and warns that it is against our policies and there are legal consequences for sharing this material," Davis added.
The safety alert is in addition to removing the content and reporting it to the US-headquartered National Center for Missing and Exploited Children (NCMEC). "Accounts that promote this content will be removed. We are using insights from this safety alert to help us identify behavioural signals of those who might be at risk of sharing this material, so we can also educate them on why it is harmful and encourage them not to share it on any surface, public or private," Davis wrote.
Facebook has scaled up efforts to detect and remove networks that violate the platform's rules and has updated child safety policies.
"We've updated our child safety policies to clarify that we will remove Facebook profiles, pages, groups and Instagram accounts that are dedicated to sharing otherwise innocent images of children with captions, hashtags or comments containing inappropriate signs of affection or commentary about the children depicted in the image," the blog post further said.
Under the new policy, while the images alone may not break rules, the accompanying text will help the platform assess whether the content sexualises children and whether the associated profile, page, group or account should be removed.
Facebook said it has made it easier to flag content for violating child exploitation policies.
"To do this, we added the option to choose 'involves a child' under the 'Nudity and Sexual Activity' category of reporting in more places on Facebook and Instagram," it said, adding that such reports would be prioritised for review.
Facebook's analysis of 150 accounts it reported to NCMEC for uploading child exploitative content in July and August of 2020 and in January 2021 revealed that an estimated more than 75 per cent of the people did not exhibit malicious intent.