Facebook’s policing of vitriol is even more lackluster in Ethiopia, critics say

Aisha Gani
October 17, 2021

A float in Düsseldorf, Germany, depicting hate speech at Facebook as the monster from the film Alien. Photograph: Sascha Steinbach/EPA

On a cloudy evening in Nairobi, Berhan Taye is scrolling through a spreadsheet in which she has helped document more than 140 Facebook posts from Ethiopia that contain hate speech. There are videos of child abuse, texts of hate speech against different ethnic groups, and hours-long live streams inciting hatred. These posts would breach Facebook’s community guidelines in any context. Yet for Taye and her colleagues, this is what Facebook’s news feed has looked like for years in Ethiopia.

Because there aren’t enough content moderators focused on Ethiopia, it has been up to Taye, an independent researcher looking at technology’s impact on civil society, and a team of grassroots volunteers to collect and then report misinformation and hate speech to Facebook.

It’s dangerous work – people who put out the hate speech are organized – so volunteers are anonymous. They spend hours watching violent live streams and collating hateful content. It takes a toll on their mental health.

Once they send their report over email, it can take a week for Facebook to respond – if they’re lucky – and sometimes 70% of the content will be removed, according to Taye. In some situations, the big tech company has come back to the activists requesting translations of the content. “Over and over again, we’re seeing they’re actively failing every time,” Taye says.

“They’re not willing to invest in human rights, invest in resources, and in languages that are not making them money.”

Facebook disputes the charge that it polices abuse less intensively outside the US, saying it has invested $13bn globally to tackle the problem, in work that involves 15,000 people across dozens of languages.

Researchers like Taye say that’s not enough.

In June, Facebook reported it had removed a network of fake accounts in Ethiopia targeting domestic users ahead of the country’s elections.

Taye, however, said she has been in conversations with Facebook since 2018 about the situation in Ethiopia, a country where there has been ethnic cleansing, where armed conflict is escalating, and where Facebook is a crucial platform for information.

Now, Taye is calling for Facebook to release any human rights impact assessment reports it may hold on Ethiopia.

Like many digital activists around the world, Taye and her colleagues have been urging Facebook for years to take seriously how its algorithms amplify misinformation, hate speech, and incitement to ethnic violence in non-English-speaking regions.

It’s an issue Facebook whistleblower Frances Haugen highlighted in her testimony to the US Congress at the beginning of October, when she said Facebook’s system of content ranking had led to the spread of misinformation and hate speech.

Content ranking works by using machine-learning models to remove or demote bad content, but those models are trained only for certain types of content. Haugen said Facebook knows: “Engagement-based ranking is dangerous without integrity and security systems.”

She added the problem was far worse in regions where posts are in languages other than English. She said the “strategy of focusing on language-specific, content-specific systems for AI to save us is doomed to fail”.

For digital rights activists, Haugen’s testimony in Congress came as no surprise. “We’ve been the victims of that,” Taye says. “It’s good for Americans to know. But we’ve been saying this. The first thing you see when you open your Facebook is the most heinous content.

“What do they think the Rohingya were saying? What was [Philippines journalist] Maria Ressa saying? Most Facebook users are not in America and Europe,” she says.

Haugen was the whistleblower who gathered the documents behind the Wall Street Journal’s Facebook Files investigation. The WSJ reported that one internal document revealed that Facebook’s work on misinformation in 2020 included 3.2m hours of searches, only 13% of which were outside the US – even though more than 90% of Facebook users are outside the US. Facebook disputes the 13% statistic, which it says reflects just one program of many.

Networks of digital rights and human rights activists around the world have been pressing Facebook to release its reports and to run risk assessments before entering new markets.

Eliška Pírková, the global freedom of expression lead at Access Now, a human rights organisation, called for human rights-centric regulation of online platforms such as Facebook. She said Facebook users needed to be protected by default from dark patterns – interface designs that nudge users towards certain behaviours.

Haugen’s testimony confirmed what civil society already knew, Pírková said, and revealed the company’s “inherent opacity and unwillingness to disclose information and how algorithms operate”.

“Civil society shouldn’t have to hold Facebook to account,” she said, adding that engagement with the company had not been very meaningful and that there had been no follow-up.

She pointed to Facebook’s moderation during the events of May 2021, when Palestinians were evicted from Sheikh Jarrah in Jerusalem and Gaza came under an 11-day bombardment: mob violence against Palestinians was incited in WhatsApp groups, while pro-Palestine posts were removed from Facebook’s platforms.

If Facebook did not learn lessons from the past, it would be countries in the global south and historically oppressed and marginalized groups that would “pay the highest price for our mistakes”, she said.

Myanmar is an often-cited case study of the catastrophic impact of disinformation and hate speech shared on Facebook. According to the UN, the country became a “textbook example of ethnic cleansing” when, from August 2017, more than 700,000 Rohingya were forced to flee violence in Rakhine state.

The country has seen a rapid rise in Facebook users: there were 1.2 million in Myanmar in 2014 and 21 million by January 2019. By January 2021 there were 23.65 million users, about 40% of the population.

Victoire Rio, a digital rights researcher focusing on Myanmar, said Haugen’s testimony shone a spotlight on the discrepancies between what Facebook does in the US and the “lack of action and intervention” in the rest of the world.

When Facebook first established a presence in Myanmar, it had only two Burmese-speaking moderators. Now there are 120, according to Rio.

“The amount of investment that’s going into trying to clean up and sanitize the content that gets through in the US is just not there in other parts,” Rio said. “But it took a genocide, it took the UN calling them out on it, the US Congress calling them out on it, the western press calling them out on it, for us to finally be heard,” she said.

In a statement, a Facebook spokesperson said: “Our track record shows that we crack down on abuse outside the US with the same intensity that we apply to it within the US. We have invested $13bn globally to tackle this challenge and have 15,000 people reviewing content outside the US, covering more than 50 languages and working in more than 20 locations across the world.

“Our third-party fact-checking program includes over 80 partners who review content in more than 60 languages, with over 70 of those partners located outside of the US. We have also taken down over 150 networks seeking to manipulate public debate since 2017, and they have originated in over 50 countries, with the majority coming from or focused outside of the US.”

Aisha Gani in London
@aishagani
