The Oversight Board set up by Facebook issued a decision today calling on Meta to commission an independent assessment of the platform’s role in heightening the risk of violence in Ethiopia, as part of a more specific ruling on a post that made unfounded claims about Tigrayan civilians.
The ruling comes a year into an ongoing civil war between the Ethiopian government and rebels in the northern Tigray region of the country, which has created a humanitarian crisis that has left hundreds of thousands of people facing famine-like conditions and driven millions from their homes.
Facebook has come under fire for its role in the Ethiopian conflict, with observers drawing parallels to the company’s role in the genocide of Rohingya Muslims in Myanmar. There, an online campaign led by Myanmar military personnel stoked hatred against the Rohingya minority and led to acts of mass murder and ethnic cleansing. In Ethiopia, similar rumors and incitements to violence have been allowed to proliferate, despite numerous Facebook employees reportedly raising the alarm within the company.
The Oversight Board seemingly acknowledged Facebook’s lack of action. It recommended that Meta “commission an independent human rights due diligence assessment on how Facebook and Instagram have been used to spread hate speech and unverified rumors that heighten the risk of violence in Ethiopia” and add specific guidance on rumors during war and conflict to its Community Standards.
“In line with the board’s binding decision we have removed the case content,” said Facebook spokesperson Jeffrey Gelman in a statement. “We are reviewing the board’s full decision and recommendations, and per the bylaws, we will respond within 30 days.”
The content at the heart of the decision was a post in Amharic that was uploaded to the platform in July 2021 and claimed without evidence that the Tigray People’s Liberation Front (TPLF) had killed and raped women and children in the Ethiopian Amhara region with the assistance of Tigrayan civilians.
After the post was flagged by automated language detection systems, an initial decision was made by a human moderator to remove it. The user who had posted the content appealed the decision, but a second content moderator confirmed that it violated Facebook’s Community Standards. The user then submitted an appeal to the Oversight Board, which agreed to review the case.
Ultimately, the Board found that the content violated Facebook’s Community Standard on Violence and Incitement, and it confirmed that the decision to remove it was correct. The Board also criticized Meta for restoring the content while its review was still pending.