
News
#MilitarisedMasculinities #FeministPeace #OnlineViolence

Hate Speech and Online Violence: How the internet fuels killings, conflict and polarisation

This is the 10th episode of the Mobilising Men for Feminist Peace podcast. In this episode, we explore how the internet exacerbates societal polarisation and online violence – from the rise of clickbait and conspiracy theories to the amplification of hate speech.

Image credit: WILPF
WILPF International Secretariat
24 December 2024


“The strategy is to stop believing that tech companies can regulate or will regulate themselves. They will not. So what we’re fighting for is for robust regulations at the international level.” Rasha Abdul Rahim, People Vs Big Tech. 

About the Episode

What started out as a space for connecting people, sharing experiences, and building online communities has now turned into a breeding ground for disinformation, division, and, in some cases, real-world harm.  

In this episode, we explore how the internet exacerbates societal polarisation and online violence – from the rise of clickbait and conspiracy theories to the amplification of hate speech.

Guests: 

⭐ Rasha Abdul Rahim, Interim Executive Director of People Vs Big Tech and former Director of Amnesty Tech

⭐ Odanga Madung, co-founder of Odipo Dev, a data consultancy in Nairobi.

Episode Script

Reem Abbas: [00:00:00] It’s no secret that social media and online platforms have revolutionised how we connect with one another. They have enabled instant global connections, amplified diverse voices, and fostered the exchange of ideas, cultures, and support networks around the world. They have also given rise to a darker side.

Hate speech and online violence are poisoning public discourse, dividing communities, and in some cases, inciting real world violence. Even worse, internet companies are not only failing to control the epidemic of hate, they are, in some cases, actually fueling it. 

Welcome to the Mobilising Men for Feminist Peace podcast, where we journey into the intersection of masculinities, violence, and feminist solutions.

I am Reem Abbas, Communications Coordinator of the Mobilising Men for Feminist Peace program at the Women’s International League for Peace and Freedom, the oldest women’s peace [00:01:00] organisation. 

In recent years, the internet has become a breeding ground for disinformation. What was once the domain of a few rogue actors has now become a multi-billion-dollar industry – with clickbait headlines, conspiracy theories, and fake news flooding our feeds.

The rise of disinformation isn’t just a matter of false facts, it’s a complex ecosystem where bad actors exploit algorithms and weaponise division for political or financial gain. The consequences of this new digital age are profound. And in this episode, we explore how this feeds violence, both online and in the real world.

To help us understand why tackling the spread of disinformation has never been more urgent, we are joined by Rasha Abdul Rahim, Interim Executive Director of People vs. Big Tech and former Director of Amnesty Tech, [00:02:00] and Odanga Madung, co-founder of Odipo Dev, a data consultancy in Nairobi. 

Firstly, I asked Odanga to explain what disinformation is.

Odanga Madung: It’s purely the digital subset of propaganda. So information that is created with the purposeful intent to mislead, right? And it most likely tends to come from powerful institutions. So there’s also a power element in terms of how disinformation is conveyed towards people. It could be a government. It could be a corporation. It could be different types. But I think that’s probably the best description that I can give of it. I mean, there’s usually a lot of back and forth with regards to how exactly it might be defined, but I typically tend to look at it from a political propaganda point of view, and in particular, I also look at it as an exercise of power.

Dean Peacock: Odanga staying with you for a moment, you’ve written about the ways in which political [00:03:00] parties, uh, pay operatives to spread disinformation on the internet. Can you tell us a little bit more about what you have uncovered in that area? 

Odanga Madung: So the long and short of it is that, um, politicians have always had a very strong incentive, um, to be interested in information environments, and because they have a strong interest in information environments, they’ve also been very interested in the parts of information environments that they can control.

And the internet has provided them with very different avenues through which they can pull that lever. Uh, now my investigations looking into this particularly, um, looked into what we now call Kenya’s disinformation-for-hire industry, where we found out that, because platforms have certain aspects of their infrastructure that are very vulnerable to manipulation, there are several people, particularly what we like to call election vendors, [00:04:00] people who are adjacent to politics and typically provide services to politicians during elections to enable them to win.

You know, they developed an industry that was particularly geared towards serving politicians in order to manipulate lots of these features, such as X’s trending algorithm, for example, or creating groups on platforms such as Facebook. Um, these constituents would go ahead and pay for services that would be particularly used to attack dissidents and to try and manufacture public opinion of one kind or another.

Um, we engaged with tech platforms for almost two years trying to get them to change certain elements of their infrastructure to try and reduce these kinds of things because we did realise that simply performing takedowns on accounts is playing a game of whack a mole, um, and in particular, a big target of mine was X.

We had started making very significant [00:05:00] progress and then Elon Musk bought Twitter and that whole thing went into the trash. And so I think, you know, that also speaks to the kinds of, you know, struggles in terms of tech accountability that many of us end up going through, because, you know, the window of influence that people like us would have from outside and the ability of many of these, um, Silicon Valley tech giants and even Chinese tech giants to listen to us – that capacity and capability has drastically reduced. And so I think for me, that’s the biggest takeaway that I’ve gotten from my engagement with the disinformation-for-hire industry over the past few years. 

Dean Peacock: A question for both of you that we’ll return to throughout is this question of gender norms, masculinities, and what’s influencing men’s use of violence, whether that’s in situations of armed conflict, um, against their intimate partners, or in fact on the streets.

Um, and so I wanted to just get a preliminary sense from [00:06:00] both of you as to how this phenomenon that we’re talking about is gendered. 

Odanga Madung: One advantage that I’ve had across my career is, you know, the duality of being able to cover some of the real on-the-ground effects of what intersects with technology. Like in this particular case, when we think about the rise of the manosphere, it never really occurred to me how dangerous the manosphere was until I started thinking about the other aspects of my career, where my team and I at Odipo basically currently run the largest database of femicide cases across the African continent. And to be honest, in terms of how this is gendered, I think the best thing I can do is speak to the stories that we come across in our femicide database, because I think they really help put into perspective why I have, in very particular terms, begun to develop a very serious problem, or a very serious antagonistic attitude, towards the manosphere. 

When you look [00:07:00] at a lot of the stories of how women get killed, particularly in Kenya, because those are the facts that I can speak to, a large part of it is mostly due to intimate partner violence. And when you look at the relationship between the people – between the women and the men that end up killing them – it often either comes about as a result of, number one, rejection, or, number two, economic inequalities, right? So you find a case where a husband murders his wife over a plate of beans, right? Or someone gets rejected, and then they go ahead and murder their girlfriend or something of that sort. And so we begin to really interrogate: what are the engines and drivers that would encourage men to act in this kind of way, such that a simple conflict over perhaps a plate of beans, or me being rejected by my girlfriend, would make me decide that, no, if anything, I should go ahead and kill this person, right? 

And in particular in Kenya, what we’ve [00:08:00] seen is a very strong inclination towards strangulation. These are the dangers of what people have come to call a culture of toxic masculinity, or, you know, just the dangers of the patriarchal nature of our society. Personally, for me, I’ve really had to contend with the idea that, for my partner, getting into marriage is in essence actually a very dangerous thing, as it is for a lot of Kenyan women. And so when you find that platforms have been able to sort of give free rein to the spread of a lot of these very, very dangerous ideologies – in particular, that women should be considered to be no different from children and that men own women – you know, these are essentially very radioactive ideas that can lead to some very dangerous consequences down the line, as we have seen and as I will actually be talking about in some new publications that will be coming out pretty [00:09:00] soon. 

Dean Peacock: Rasha, can I put that same question to you? How do you think these issues of technology and online platforms are influencing some of the gender norms that are contributing to men’s violence? 

Rasha Abdul Rahim: Yeah, absolutely. I mean, building on what Odanga just said, which was hugely insightful, I think one of my biggest takeaways from, from what he just talked about was the fact that, you know, we can’t divorce the offline world from the online world. The online world is, is an extension of the offline world on steroids, if you like.

I think it’s less now, but there has been a huge misconception that technologies are neutral, and they’re not biased, and that’s why they’re great, and that’s why we need to have technology integrated in every single aspect of our lives. 

The truth is that technologies are deeply biased, and the reason why they’re deeply biased is simple. It’s because they are created by humans, and so the [00:10:00] biases that humans have are reproduced in that technology. So that’s one. The other issue is that the majority of technologies and social media platforms have been developed primarily by white men in Silicon Valley, and indeed by a small handful of tech companies in Silicon Valley.

And that’s a huge issue for several reasons. One is that they don’t take into consideration local contexts. We’ve seen this issue creep up in terms of content moderation around the world, or the lack thereof. There is no sort of diversity built into these technologies. And so again, they perpetuate a dominant worldview that is based on white supremacy, if you like, or on Western values.

And how this manifests is in several ways, and I’d just like to mention two. One is, when I was at Amnesty International, we did a lot of work looking at violence against [00:11:00] women online. And one of the studies we did was on what was then called Twitter – it’s now called X. We did a very large-scale study, uh, looking into abuse against women on Twitter, and we did it with a company called Element AI, an AI software company. We partnered with them to create a comprehensive overview of how online violence against women manifests, and who is targeted, who is sort of the most at risk of online violence against women. And we did this in a very sort of creative way. We had more than 5,000 volunteers from 150 countries sign up to take part in what we called Troll Patrol, which was a unique crowdsourcing project designed to allow us to process large-scale data about online abuse. 

And so these volunteers actually went through nearly 290,000 tweets which were sent to, uh, 778 women politicians and [00:12:00] journalists in the UK and, uh, the US. I should mention this was in 2017, by the way. Then we worked with Element AI and we used data science and machine learning to extrapolate data about the scale of abuse that women face on Twitter. And we calculated 1.1 million abusive or problematic tweets that were sent to these women across the year – or, better said, one every 30 seconds on average. And what that study really showed us was that black women were disproportionately targeted. 

So if you were a black woman, you were 84 percent more likely than a white woman to be mentioned in abusive or problematic tweets. That’s 1 in 10 tweets mentioning black women that were abusive or problematic, compared to 1 in 15 for white women. The other very sort of staggering data point that we uncovered was that 7.1 percent of tweets that were sent to women in the study were problematic or abusive. Uh, women of colour – so black women, [00:13:00] Asian, Latino, and mixed-race women – were 34 percent more likely to be mentioned in abusive or problematic tweets than white women. And the other thing that our study showed was that online abuse against women cut across the political spectrum. 

So that really shows the, how the offline world sort of manifests into the online world and that really gave us an insight into how it’s usually women of colour, who are the ones that are disproportionately affected by online abuse.

And then the second way in which this manifests is in the development of discriminatory artificial intelligence systems. So, one example is facial recognition technologies, which are incompatible with human rights for several reasons. Apart from the biggest reason – which is that they’re a tool of mass surveillance – they are also inherently discriminatory.

And several studies have shown, particularly in the US, that these technologies fail to detect darker skin tones and, even more than that, they fail to [00:14:00] specifically detect women with darker skin tones. So the issue is deep-seated and it permeates all technologies and social media platforms. 

Dean Peacock: Beyond bias, right? Because I think in some ways your framing is generous with regards to the Silicon Valley companies that are making money off of online platforms. Is there a way in which the internet itself and these online platforms are amplifying and driving some of that misogynistic behavior? 

Rasha Abdul Rahim: Yes, absolutely. And this is directly in the DNA of social media companies, and particularly social media companies that rely on a surveillance-for-profit business model.

And what that means is that their money-making methods involve spying on people’s fears, their likes, their dislikes, every single intimate [00:15:00] detail about them, to be able to create detailed profiles about them and then to sell those profiles to advertisers or anybody who wants to influence them.

And the reason why I agree with what you’re saying, totally agree that social media platforms tend to amplify and in some ways exacerbate this kind of online violence and incendiary content and hateful content, is because that’s the way they keep people glued on their screens for longer. And the reason why they want to keep people glued on their screens for longer is so that they can continue to collect as much data as possible and to continue to keep serving them ads, because that’s how they make money.

So it’s really down to the fundamental business model that is sort of the root cause of all of these harms that we’re seeing proliferate on social media. 

Reem Abbas: You really just described how, you know, the internet has become a hotbed for hate speech, cyberbullying, violence, and [00:16:00] how a lot of this violence and cyberbullying is gendered.

In the larger, of course, you know, framework, it impacts, uh, women’s productivity and also women’s political participation, you know, because they’re too scared, you know, about this cyberbullying and harassment to share their views online and, and, and take part in discussions and, and it’s definitely affecting a lot of online activism as well.

You know, Rasha, if I want to just go deeper with you about what you just said, you talked about the business model of tech companies and how this business model is really just built on, you know, invading the privacy of users and then using this information for profit. So you have been, you know, through different platforms and through different roles, challenging social media companies for how they manipulate and use algorithms, and also for not regulating content, you know, that is leading [00:17:00] to violence – whether it’s political violence, gendered violence, ethnic violence – and more polarisation. 

In your opinion, what really needs to be regulated and what needs to happen to this very invasive business model for it to be, you know, more humane and more, you know, rights based basically?

Rasha Abdul Rahim: I would say three main things. Before I do that, I would like to preface that by saying that if we look back at the development of the internet and the great promise of the internet in the 90s as a, you know, truly connecting force and democratising force and, and a lot of the reasons why people enjoyed the benefits of that was, was through very lax regulation.

And that lack of regulation has continued on, uh, until today, um, with the exception of EU regulations, which we’ll touch upon later. But the internet – or, if we’re talking about big [00:18:00] tech companies, the social media platforms that were built by big tech companies – was never really built with people at the heart. Maybe to begin with, to be very, very generous, they were built with connection in mind – connecting people, um, and, you know, increasing people’s access to information and to each other. Maybe that was the sort of initial promise, but, you know, we can see now that it’s profit that’s driving these social media platforms and their perpetuation.

And so what we really need is social media platforms and technologies that are truly for the public interest, that truly have people at the heart rather than profit at the heart. But in terms of what specifically needs to be regulated, People Vs Big Tech has three pillars of work that we, um, that we campaigned for.

Uh, first is around fixing our feeds. So, you know, big tech’s recommender systems and algorithms are toxic. And as we’ve discussed, they’re amplifying hate speech and disinformation. They also [00:19:00] weaponise every societal fault line in order to keep us engaged on their platforms for longer so that we’re served more ads and so that they can make more money.

And as I mentioned before, this isn’t just a, you know, a glitch in the system. This is actually the system. It’s deliberate. It’s a deliberate design choice. And it violates our rights, and it’s an unethical model, which is designed to engage us, and enrage us, and addict us. And so we need regulation to make these platforms safe by default, and we need to have real control over what we see, over the recommender systems that are used and over the way that, you know, we are able to access information. That means having recommender systems that are based on profiling switched off by default. Currently under EU legislation, that’s not the case. Social media companies have to provide one alternative to recommender systems based on profiling. But that [00:20:00] really involves, in real terms, people going into their settings and turning off the recommender system based on profiling and opting for another one. And people just won’t do that, because it’s complex and many people don’t even know that that’s even an option now. 

The second thing we’re calling for is to stop the surveillance-for-profit business model. You know, Meta and Google really hold a duopoly over the entire online world and billions of users across the globe. They surveil our personal interactions, they profile our vulnerabilities, and then they create profiles about us and sell those to advertisers or whoever wants to influence us. And so what we’re calling for is a ban on surveillance advertising, transparency on all aspects of, for example, online ads, and rigorous enforcement of privacy rights. Now, again, the Digital Services Act does not ban surveillance advertising. 

And then the third thing we’re calling for is breaking open big tech. You know, as I mentioned, there are a [00:21:00] handful of tech companies that dominate social media and other technologies. That means that other rights-respecting alternatives aren’t able to emerge because of the hold that they have over the information ecosystem, but also over the economic system. And so that means that they’re really dictating the direction of powerful new technologies, like AI, in ways that, again, prioritise their profits. And what we want to see is technological progress that serves people first and foremost, rather than the profits of a handful of global mega-corporations. And so what we’re calling on governments to do is to enforce antitrust and competition regulations, policies and mechanisms, and to enforce economic regulations that allow non-Google or non-Meta alternatives to emerge. And we want governments also to invest in technology that actually serves [00:22:00] public goals and puts people first. 

Now, you also mentioned, uh, content moderation, uh, Reem, which is also an important aspect, but without tackling the root causes – the three issues that I just mentioned – content moderation is not going to be sufficient. But I would like to just talk briefly about the importance of content moderation, and also the inequality of content moderation across the globe. And we’ve seen that this can have really painful and destructive consequences, because big tech companies offer different protections across the globe, depending on people’s power or nationality or language.

I just want to cite a 2021 investigation by The Verge, which uncovered that Facebook actually has a tier list, which gives countries different levels of importance. So Brazil, India, and the USA are the highest priority. Then countries like Germany, Indonesia, [00:23:00] and Israel were placed, um, in the second-highest priority, and then there was a third tier, which is basically the rest of the world, all sort of grouped together.

So, you know, that really sort of shows you the failure of platforms – specifically, uh, Meta – to invest in safety systems in countries where there are situations of crisis and conflict. And it shows, or it suggests at least, that Meta will only act where there’s political pressure and will, you know, otherwise leave millions of users in the global South unprotected and exposed to harmful, extreme and hateful content.

Reem Abbas: You said a lot of interesting things, and for me, the fact that many countries where serious conflict is happening are deprioritised, and that Meta and other companies don’t invest [00:24:00] enough in human resources – checkers, people who are able to track and see what are the videos and the content that is driving more violence – this is a big issue. And this is at the heart of the current, um, case against Meta: as you know, a group of individual petitioners, uh, represented by Mercy Mutemi of Nzili and Sumbi Associates and supported by Foxglove, the tech justice nonprofit, are suing Meta for fuelling Ethiopia’s ethnic violence. And the victims are arguing that its algorithms have amplified hatred, and this has led to mass killings and the assassination of activists. So in your opinion – I mean, if you could just kind of comment on this – how is this going to contribute in any way to changing how Meta operates, especially in very vulnerable and conflict-affected countries that you’re saying are continuing to be [00:25:00] deprioritised? 

Rasha Abdul Rahim: So the, the case that you, that you mentioned, Reem, is actually a case that, that Amnesty International investigated as well. Actually, one of the petitioners is an Amnesty International employee. And I guess one of, one of the things I’d like to start with is, is that the power of big tech companies is such that it’s really difficult to hold them to account, you know, even through strategic litigation and, you know, Meta is armed to the teeth with, um, the world’s best lawyers who are, in fact, arguing that, um, the Kenyan courts don’t have jurisdiction over this case.

Um, but that is also obviously in dispute at the moment. But just maybe to explain to the listeners the background of this case: a university professor, um, in northern Ethiopia was hunted down and killed in November 2021, [00:26:00] um, weeks after he’d received posts inciting hatred and violence against him on Facebook. These were, um, flagged to Facebook’s content moderators several times, but they failed to take action.

And his son now claims that, um, Facebook only responded to reports about the posts eight days after, um, Professor Meareg’s death. Um, and that was three weeks after his family had first alerted, um, Facebook. And so that really shows the sort of, um, really tangible offline consequences of a failure to take action in relation to content moderation.

Now, the other thing to flag here is, um, the Facebook Papers that were, uh, released by Frances [00:27:00] Haugen, who was an ex-Facebook employee, which really showed that Facebook has actually long known about its platforms’ problems, including the proliferation of hate speech and disinformation, which, you know, has led to devastating real-life consequences, not only in Ethiopia, but also in India and Myanmar and Sri Lanka.

And the papers also show how Facebook fails to effectively moderate content in much of the world. Um, and, uh, I think 11 documents from 2020 suggested that the vast majority of its efforts against misinformation – around 84 percent – went towards the US, with just 16 percent going to the rest of the world, lumped together. And what’s more, these revelations from the Facebook Papers showed that, um, Facebook deliberately chooses to [00:28:00] maximise engagement over user safety.

So it’s not that Meta doesn’t know what’s happening, but that, uh, in some instances, it’s a deliberate choice to continue to operate in a way that benefits the surveillance-for-profit business model. And now, back to this case specifically and what it seeks to achieve. Um, so it’s going to deal with, uh, substantive questions, um, relating to what extent, if any, uh, Meta is accountable for the human rights violations caused as a result of the content promoted on its platform, on Facebook.

And the petition also asks the court to order Facebook to make substantial changes to its operations. Um, so for example, to stop the virality of hate, um, to start demoting violent and inciting content – similar [00:29:00] to the steps it took, for example, um, after the US Capitol riots in January 2021.

The petition also asks Facebook to employ enough content moderators and staff, um, with the right languages, um, to be able to look at the content. Um, and crucially, it also calls for a restitution fund to be established for, uh, victims of the hate and violence incited on Facebook. Um, I think they’re seeking around – uh, I need to check my figures, sorry – they’re seeking around 1.6 billion in a victims’ fund. And, you know, it shouldn’t take lengthy and expensive legal action to hold Facebook to account for its role in spreading [00:30:00] violence and incitement to violent speech in Kenya and other countries.

Dean Peacock: How pervasive a problem is this? Is this a particular situation that occurred in Ethiopia, or is this a pervasive problem that is in significant ways fuelling violence and polarisation? 

Odanga Madung: The Ethiopian situation is quite an interesting one because I tend to think about it as a typical problem of what I call, um, when hate gets mainstreamed.

If you think about it as a political problem, there are many ways in which platforms are able to make certain types of problems worse. And I think, you know, in the case of genocides, cases like Ethiopia, Myanmar, and, you know, to some extent other countries such as the DRC come to mind as cases where such kinds of problems tend to occur.

I would like to think about it in terms of a one-to-one mapping. If you compare it with other aspects – again, [00:31:00] sorry to come back to the manosphere, but given that I’ve just recently been writing about it, it’s such a clear example to me – there are certain types of rhetoric that societies tend to move away from because they know how dangerous they can be. But time and time again, we have found that such kinds of rhetoric have managed to find a home on various technology platforms, purely because a lot of these companies have managed to scale and become so large that they have no understanding of the context of the jurisdictions in which they operate. They have been completely unable to carry out content moderation towards dealing with these different kinds and natures of hate speech within these countries.

It’s a problem that’s usually technically called context bias. And at least in my own capacity, I’ve written about it severally, especially when looking into elections – in the lead-up to elections in various African countries. Typically, especially if a country has a history of conflict, there are some forms of speech and some forms of rhetoric that [00:32:00] that society in itself will tend to shy away from, or that society in itself will tend to try to put down as much as possible, right? 

And in the case whereby that conflict is new, you know, there are very many people that will recognise the relationship between certain types of rhetoric and certain types of conversations and, um, activities, not just online, but within the society itself, and will try to raise the flag that this type of content and these types of conversation are very likely to be strongly associated with entire communities of people getting wiped out.

Again, you don’t have to think far. You can even think about things like the Rwandan genocide, where, you know, certain sections of the population were referred to as cockroaches. 

So, I mean, it is a platform’s job to be able to recognise when there are certain parts of the world, or certain parts of their ecosystems, where, um, political temperatures are rising, either due to elections or different kinds of political crisis, and it’s up to them to [00:33:00] try and act as fast as they possibly can to try and address these kinds of issues.

Now, if they don’t – you know, there’s a very strong movement to try and make sure that they are held very specifically liable for the kind of content moderation decisions that they make. And you know, the Ethiopia case just happens to be one of them, but I think that they’ve gotten away with a lot more. Um, so I think that’s what I’d have to say about that. 

Dean Peacock: I know you’ve done some writing on, um, the inadequacy of content moderation out of Nairobi, serving as one of the hubs in East Africa for content moderation.

Can you just fill the picture out a little bit about the nature of the problem there? 

Odanga Madung: I mean, typically what Facebook and Meta have gotten into trouble for is mistreating workers. You have cases where workers have gotten PTSD after working on Facebook’s, on training Facebook’s content moderation systems.

But, you know, the much, much broader question of what exactly the content moderators were doing, um, within [00:34:00] these jurisdictions – how much of Facebook’s and Instagram’s and Meta’s other platforms they actually covered – these are questions that we still do not have answers to. 

Yes, we are aware that many of them were serving many different languages across the continent, but it also became very clear, um, that they were not staffing them enough. If I am not wrong, I think the people that were moderating Amharic content with regards to the Ethiopia conflict, um, were, I think, not more than 10 people, right. And you could fact-check me on that, but it was very clear that there were very many languages being staffed with very few people.

Now, of course, this raises the questions of, and this is a very, very big challenge for many of us when we think about this topic, which is, you know, how much moderation is enough moderation, right? Because I think also, as we think about this movement and a lot of the movements that are focused [00:35:00] around the idea of tech accountability, we also need to understand the point at which we get to actually solving the problem.

And so some of the questions I think that many of us have been trying to ask are, you know, how much moderation is enough moderation? How many moderators need to be recommended for a specific type of language to have effective content moderation? Secondly, what are the right kinds of design codes that need to be implemented within technology ecosystems to enable, you know, what a definition of healthy interactions online would necessarily need to look like?

Dean Peacock: So can we turn to then that question of strategy? How are you and other civil society activists attempting to achieve the very clearly articulated goals that you have, um, for how big tech should do things differently? What are the strategies Rasha and Odanga you are [00:36:00] using to advance change? 

Rasha Abdul Rahim: Sure. So, um, the first strategy is to stop believing that tech companies can regulate or will regulate themselves. They will not. We’ve known that for years now and there is no incentive for them to, to do that.

So what we’re fighting for is for robust regulations at the international level. We’ve started at the EU with the Digital Services Act and the Digital Markets Act to rein in the power of big tech companies and to impose sanctions on them, if you like, for non-compliance. 

So regulations, that’s one. Strategic litigation is also a very, very powerful tool, but as I said before, it’s lengthy, it’s very, very expensive, and there’s a huge power asymmetry here for individuals who would want to seek to take legal action, if you compare [00:37:00] the sort of might and the financial resources and the lawyers that big tech companies have at their disposal. I do firmly believe in people power, and we need to keep challenging tech companies in the courts, and also, you know, using the courts to enforce existing regulations – that also helps us identify where the gaps are and where to advocate for more regulation. 

The other thing that’s important is allowing a more diverse digital economy to emerge, which gets back to the, um, breaking up – or rather breaking open – big tech so that we can level the playing field. And in order for us to create the conditions for a new digital economy, regulators need to target the structural power of these tech giants so that [00:38:00] alternatives can emerge and can scale. So that means breaking up dominant tech firms through stronger enforcement of competition and antitrust laws and regulations, and in some cases enforcing structural separations to prevent the further consolidation of these powers, for example by blocking more mergers and acquisitions. 

We need to insist that dominant tech firms have interoperability features, which enable users to freely choose and move between different platforms and services. And that also will allow new entrants into the market.

We also need to look at taxing dominant tech firms and redistributing the enormous profits that they currently exact from us. It also means stimulating a new and fair digital economy – so, for example, committing significant investment towards public digital infrastructure based on free and open-source software and digital [00:39:00] commons, and using public procurement as a market lever to encourage, for example, the adoption and scaling of open and interoperable alternatives to big tech incumbents. Which, I realise as I’m saying all of these things, is no small task, right? It will take years and effort, but it’s absolutely necessary to tackle the structural power that lies at the heart of the issue.

And people power is the final thing I would say. We need to come together as citizens, as concerned individuals, you know, people who rely on these platforms for connection, for work, for business, for whatever it may be. And we need to demand that these technologies work for us and have our interests at heart, rather than the profits of tech companies. And that means building a broad-based movement to work together to that aim. And that’s [00:40:00] the wonderful thing about the People vs Big Tech movement: it came into existence, uh, during the Digital Services Act negotiations, and it was an effort to bring a diverse set of people together in order to ensure that robust regulation is agreed and enforced at the European level. 

And young people as well have a huge part to play in this. They grew up in the digital age. I’m old enough to be thankful that I did not grow up in the digital age and I didn’t have social media when I was, when I was at school. Children these days have grown up with, with social media platforms. And, you know, there’s been a huge toll on the mental health of children and young people. But, you know, it’s also opened up many, many opportunities, and they are the ones who are going to inherit the future of the online world. So they need to be able to insist that they are heard and [00:41:00] insist that they are able to shape what that digital future looks like. 

Dean Peacock: Great, thank you. I’m struck by just how clear you are in terms of what is needed.

Um, and I’ve heard you reference the, um, EU Digital Services Act and the Digital Markets Act a number of times. Does that provide you with some sense of optimism that change is in fact possible and happening, albeit in very circumscribed spaces? 

Rasha Abdul Rahim: Well, as a member of civil society, we are very bad at celebrating any kind of regulation, but I would say that it’s, it’s certainly a, a promising, uh, first step in, in, in regulating big, big companies.

And I’d say that Europe is, is ahead of any other region when it comes to this regulation. So it is very, very welcome. And the Digital Services Act does contain many responsibilities and obligations for what are called very large online platforms and very large online search engines. [00:42:00] So those are platforms and search engines that reach an average of 45 million monthly active users within the EU – and obviously cover platforms like Facebook, Instagram, YouTube, TikTok. 

Under the Digital Services Act, when it comes to recommender systems, these platforms have to now assess any systemic risks of their products and services. So essentially their, how their platforms work and they need to put in place measures to mitigate against any negative effects that may arise from their products or services.

They also have to show more transparency – so, for example, to disclose the main parameters of their recommender systems and provide users with at least one option of a recommender system that is not based on personal data profiling. They are no longer allowed to use dark patterns – that’s, you know, manipulative design practices that tend to influence users’ behaviour. So, um, [00:43:00] yes, as always with any regulation, the devil is in the detail and the key is effective enforcement. And that’s what we are working towards now, um, to ensure that the words on paper translate into action. 

Dean Peacock: Odanga, if you have a moment, can I just ask you what the implications of the Digital Services Act are for the region in which we are based, Africa?

What does it mean for regulation elsewhere in the world? 

Odanga Madung: There is a very problematic narrative that I keep hearing about Europe the saviour when it comes to this whole digital regulation thing. Um, but unfortunately, if I’m forced to be pragmatic, there’s a part of me that tends to want to think that, you know, it’s perhaps the best leverage that we have, because very many of us within our countries are also very wary about hearing about our governments deciding to regulate tech companies, right?

If you think [00:44:00] about it, Dean, we are stuck in this very, very awkward position where some of us recognise a need to regulate tech companies, right? But some of us also have a very strong recognition of what it means when we hear our governments want to regulate tech companies. So a big conversation for us, I think, is in terms of trying to understand how exactly we introduce some of these ideas and some of these perspectives in a way that is still able to protect our fundamental freedoms and rights as Africans. And I particularly speak as a Kenyan, where we have recently seen some of our legislators taking up some of these very talking points from the DSA and all these other types of legislation, but they come up at very awkward times. Why is it that legislators in Kenya suddenly decide to start caring so much about disinformation and [00:45:00] misinformation at a time when people have chosen to revolt against overtaxation, against police brutality, against human rights abuses? What does that say about the kind of governments that we have within our jurisdictions? 

So I think for me, yes, DSA offers us some very interesting forms of leverage. I can say that as I am someone who has, you know, very much interacted with my own counterparts in Europe in terms of thinking about, you know, how is it possible to use the DSA to, you know, sort of cover for some of the deficits of how technology companies have led to some perverse outcomes within this country. But at the same time, you know, we’re, we’re stuck between a rock and a hard place and have to be very pragmatic and maybe make political concessions in terms of what we may want to do. I mean, I would love to get your own perspective on this Dean, but I’m sure you understand what I mean in terms of the very interesting dilemma that we are faced with, [00:46:00] um, with regards to the question of what to do about these things.

Now, one entryway, however, when I do think about leverage in terms of trying to fix this problem, has been institutional work and perhaps beginning to think about these things from a perspective of consumer framings, because ultimately, these tech giants are corporations, and corporations deal with consumers. Yes, they do interact with citizens, but a large chunk of their goal and how they make money is by treating people as consumers. And so thinking about it from a consumer framing and from a consumer protection angle could introduce some avenues that can provide some forms of progress while minimising friction in terms of the political implications of trying to regulate some of these companies.

There are other questions that we would need to ask, of course, as Africans, in terms of the kind of leverage we have in terms of [00:47:00] influencing, um, the design systems of these platforms, right? What rights do we have for them to listen to us? Because tech platforms, as they have shown, have the capacity to bully countries that have a lot more influence, that have 10 times more influence than we do. You know, I think you’ve seen what Meta and Google have been able to pull off in both Canada and Australia in terms of the distribution of news on those platforms. And so, in essence, we are dealing with a situation where the West has created a monster, and, you know, the West itself is grappling, trying to grapple with it.

And also, we on our side are also trying to understand what we can necessarily do about it. But I think my final submission is that, you know, at the end of the day, we also need to understand that the reason we’re engaging in these conversations is because social media is also good. 

In countries like ours, it has, it has led to a much, much greater, greater opening of civic [00:48:00] education, of civic awakening, of democratic exchange, and therefore, I think it’s also very important for us to operate from thinking about the social media that we want. And trying to see how exactly do we bolster more of those positive elements that we have seen. And perhaps if we come to the table with the platforms from the perspective, from that perspective, they could also be more willing to listen to us. 

And so these are the kind of thing that we’re grappling with I think when it comes to thinking about how exactly we maneuver around some of the problems that we’re talking about.

Rasha Abdul Rahim: Sorry, if I could just jump in on that because, Odanga, I just loved every single thing that you just said, especially that the West has created a monster and they’re now trying to reckon with it. Um, and I just wanted to add that a lot of times that, you know, when I’ve worked on, on different regulations of different industries, it’s always the, the EU, I guess, or the West that define the rules and define the legislation and then it’s expected to have a trickle effect to the rest of the world.

So I think there’s a [00:49:00] fundamental issue there to do with, you know, a colonialist perspective to regulation, where voices of other regions are absent, largely absent from these debates and these spaces. They’re also largely absent in the development of these very technologies, um, as, as we’ve spoken about before.

So, and there is a really, really complex tension there, right, between why should the EU and the US and the West set the rules for the rest of the world. But then again, what you’re talking about, Odanga, is, you know, there’s also a real danger that in some other countries governments will use regulation to clamp down on freedom of expression, freedom of assembly, and other rights.

So it’s a really tricky space, and one of the things that I and others have been thinking about is whether, you know, we need a global level regulation, you know, that brings all of the, the countries around the table. Now, of course, there are many, many [00:50:00] complexities around that and many risks, huge risks, but it’s just, yeah, I just, it really struck me what you said. And I, I just wanted to say that I wholeheartedly agree. 

Odanga Madung: Yeah, I think there’s a very fundamental, um, American- and Eurocentric way in which we look at how we try to solve a lot of these problems. I think there’s still a very, very long way to go in terms of understanding what tech regulation looks like for us. I will speak particularly to Kenya because, secondly, I must say Africa is not a country. So when I speak to my own perspective, you know, I do think that there’s a very long way to go in terms of understanding what these types of regulations might mean for us, what they would allow us to achieve, especially when we think about it away from the white gaze. But in general, from my own reflections about this entire movement and a large part of how we think about this thing, um, you know, [00:51:00] personally, the next time someone asks me, you know, what does the DSA mean for Africans, I think I’m largely going to start trying to avoid these questions, because I’m very keen now on trying to work out what these things actually mean for us from an indigenous perspective and from our own sovereign perspective – how should we be thinking about these types of issues?

Because we’ve had cases where these things just end up getting copied, getting copied from one jurisdiction to another without necessarily thinking about it in terms of what it means for the people that end up actually being affected by these regulations. 

Reem Abbas: A big thank you to Rasha Abdul Rahim and Odanga Madung for sharing their expertise with us.

Today we learned how, dangerously, the patriarchal nature of society has seeped into the online world, with internet platforms giving free rein for dangerous ideologies against women to spread. 

We heard how technologies are deeply biased and lack diversity, which amplifies [00:52:00] a dominant worldview based on Western values.

Platforms also fail to take into consideration local context when it comes to content moderation, which can have profound real world consequences. 

And social media platforms must take urgent action to adapt their recommender systems and algorithms which amplify hate speech and disinformation. 

Thank you for listening to another episode of the Mobilising Men for Feminist Peace podcast and the final episode of the first season.

You can find all the resources and bibliography for this episode on our website and in the show notes. If you would like to support the work of WILPF, consider reading our publications and following our social media channels @WILPF. 

I am Reem Abbas and you have been listening to the Mobilising Men for Feminist Peace podcast from WILPF.

See you next [00:53:00] year.

WILPF International Secretariat

WILPF International Secretariat, with offices in Geneva and New York, liaises with the International Board and the National Sections and Groups for the implementation of WILPF International Programme, resolutions and policies as adopted by the International Congress. Under the direction of the Secretary-General, the Secretariat also provides support in areas of advocacy, communications, and financial operations.
