Facebook revealed on 11th August that it had removed seven million posts in the second quarter for sharing false information about the coronavirus. The deleted posts included fake preventative measures and exaggerated cures for the disease. An additional 98 million posts were labelled false by fact-checkers, although the content didn’t qualify for outright removal.
The statistics were released alongside the company’s sixth Community Standards Enforcement Report, a quarterly study that details content takedowns on the platform. This report was introduced in 2018 in response to accusations that Facebook was failing to police content on its platform.
When the virus outbreak began, Facebook promised to boost efforts to control the spread of misinformation. The company took steps to promote credible health information and also debunk common myths about the coronavirus.
Despite these steps, the pandemic has seriously tested Facebook’s ability to monitor content on its platform. In May, a video claiming that masks make people sick and that coronavirus was created in a lab racked up millions of views before Facebook removed it.
Part of the problem is Facebook’s reduced workforce. Because of COVID-19, the company has had to rely less on human content reviewers and more on automated systems to monitor content on the platform.
In some respects, the use of technology has proven successful. For example, the company has been able to improve its detection of hate speech on its platform. Facebook reported that it removed about 22.5 million posts containing hate speech in the second quarter, compared to 9.6 million in the first quarter. The company also deleted 8.7 million posts connected to “terrorist” organisations, up from 6.3 million.
However, many posts relating to abuse and self-harm have slipped past the technology. Child sexual abuse imagery, for example, spread more widely during the pandemic because Facebook’s capacity to find and remove it was diminished.