Facebook will expand its current harassment policies to further protect users from abuse and harmful content on the platform.
On Wednesday, the company announced it would ban content that degrades or sexualizes public figures, such as elected officials, celebrities, activists, and journalists. This builds on existing policies that already offer private individuals the same protections.
Facebook said in its announcement that it would remove "severe sexualizing content" and some other types of content used to sexually harass these public figures.
The company said, "Because what is 'unwanted' can be subjective, we'll rely on additional context from the individual experiencing the abuse to take action. We made these changes because attacks like these can weaponize a public figure's appearance, which is unnecessary and often not related to the work these public figures represent."
Under its new policy, Facebook will also remove coordinated mass intimidation and harassment that come from multiple users. Those types of targeted harassment campaigns are used to attack government dissidents, the company said.
"We will also remove objectionable content that is considered mass harassment towards any individual on personal surfaces, such as direct messages in inbox or comments on personal profiles or posts," Facebook said.
To combat such attacks, the social media platform will remove state-linked and state-sponsored organizations that use private groups to coordinate mass posting on the profiles of government critics.
For example, Manal al-Sharif, a well-known activist who has pushed for women to be able to drive in Saudi Arabia, said in 2018 that she had to delete Twitter and Facebook due to harassment she faced from "pro-government mobs," according to The Guardian.
Facebook has recently faced criticism in the wake of whistleblower Frances Haugen's interview and congressional testimony. In addition to Haugen's testimony, major reporting by The Wall Street Journal, based on a leaked collection of internal documents, suggested that Facebook hid research about its platform's negative effects on teenagers' mental health.
The company has said that research was taken out of context.
Concerns and allegations remain over the site's inability or reluctance to address misinformation.
Haugen has testified that the company stokes division among users by allowing disinformation on the platform to go unchecked.
She has argued that Facebook's algorithms could be stoking tensions and fanning ethnic violence, particularly in Ethiopia, where the government and Tigray rebels have been engaged in a civil war.
Hundreds of thousands of people are facing famine because of that conflict. Zecharias Zelalem, a journalist covering the region, recently told NPR that "prominent Facebook posters would post unverified, often inflammatory posts or rhetoric that would then go on to incite mob violence, ethnic clashes, crackdowns on independent press or outspoken voices."
"My fear is that without action, divisive and extremist behaviors we see today are only the beginning," Haugen told Congress. "What we saw in Myanmar and are now seeing in Ethiopia are only the opening chapters of a story so terrifying, no one wants to read the end of it."
Editor's note: Facebook is among NPR's financial supporters.