In recent months, it feels as if artificial intelligence (AI) is everywhere. Since ChatGPT was publicly launched at the end of 2022, companies have made use of the “AI hype” to market a wide range of products to the general public, with applications in commerce, education, health care, and many other areas of life.
However, AI presents many problems, including discrimination, surveillance, and abuse. These issues are exacerbated when AI technologies are used in policing or warfare. In the first episode of the second season of our podcast “Think and Resist: Conversations about Feminism & Peace,” we invited three guests—Matt Mahmoudi, from Amnesty International, Shimona Mohan, from the United Nations Institute for Disarmament Research (UNIDIR), and Jennifer Menninger, from WILPF Germany—to discuss the inherent problems of this technology, the consequences for marginalised communities, and what we can do to put an end to these harms.
AI as a means of violence
For several years, AI has been used as a means of societal control, mass surveillance, and discrimination. There is ample research on the harms caused by predictive policing tools, border control technologies that monitor the movement of migrants and refugees, and facial recognition systems used for abusive practices, including the suppression of protests.
In general, AI systems are trained on massive amounts of private and public data, which are often obtained through mass surveillance. For instance, Israel has been surveilling the Palestinian population for decades, collecting data on where they live, whom they know, and how they socialise. All of this data was then used to cause horrific death and destruction during Israel’s genocidal war against Palestinians following the October 2023 attacks. As highlighted by investigative reports, the Israeli military used systems like “Gospel,” “Lavender,” and “Where’s Daddy?,” all of which rely on AI to select targets to be bombed. By scoring individuals according to the threat they allegedly posed, these systems were instrumental in the killing of thousands of Palestinians and the immense destruction of infrastructure in Gaza. For a full explanation of how these systems work, check out this excellent resource produced by Visualizing Palestine.
Thus, AI can be used to kill—and is already being used to do so. More and more countries have been investing in the development of advanced technologies that can enhance their capabilities to wage war—and companies have been making huge profits by providing them. As part of the quest for high-tech violence, some governments are actively developing autonomous weapons systems (AWS), also known as “killer robots.”
Killer robots
Autonomous weapons are systems that select and apply force to a target without human intervention. That is, after a human operator deploys the machine, it can autonomously identify, track, and attack a target (whether a person or an object) using the algorithms and sensors with which it has been programmed.
There are several examples of this type of technology in popular culture, “The Terminator” being one of the most famous. But these systems no longer belong to the realm of science fiction—in fact, several companies are currently developing the technologies needed for such machines, and a handful of governments have argued that such weapons would be “beneficial” to warfighting, policing, and/or border control.
Many problems would stem from AWS. By limiting the deploying force’s exposure of its own soldiers to harm, they would make war seem less politically costly and thus lower the threshold for going to war. Autonomous weapons would also fuel a destabilising arms race and contribute to digital dehumanisation, as they reduce people to patterns of data: for a machine, a person is nothing more than data points such as physical appearance, place of residence, and social and family ties. This dehumanisation has profound impacts on the communities targeted by these weapons, in addition to reinforcing prejudices and biases already present in society.
Gendered harm
In the podcast episode “The weaponisation of artificial intelligence (AI),” we talked about how those biases are present in AWS and other AI-enabled technologies. Our guest Shimona Mohan, from UNIDIR, highlighted research showing that facial recognition systems recognise white male faces far more accurately than the faces of darker-skinned women. This is largely due to the data these systems are trained on and how it is processed. She warned that this has serious implications, particularly in the military context, as it may result in women being misidentified as non-human objects or civilian men being miscategorised as combatants.
The problem, however, goes beyond the inaccuracy of these systems. Matt Mahmoudi, from Amnesty International, stressed that it is not just about diversifying datasets, but also about questioning the biases that undergird the policing and warfare infrastructures deploying these technologies in the first place. He explained that even a system that is 99.9 per cent accurate can cause harm in the hands of a police force that disproportionately targets historically marginalised communities, such as Black and Brown communities. By deploying these tools against communities that are already at risk, police compound existing human rights violations.
That is why a feminist intersectional approach is useful when looking at AWS and related technologies, as it places these systems in a broader context of violence and oppression. Jennifer Menninger, from WILPF Germany, explored this in a recent paper she co-authored on Feminist Foreign Policy and AWS. According to Jennifer, autonomous weapons are being developed to reproduce even more harm, which is incompatible with the goals of any Feminist Foreign Policy. She argued that Feminist Foreign Policy should be about transforming security policy by applying feminist principles, including dismantling power structures, decolonising development policies, and working for peace and justice.
Organising against tools of oppression
Are you feeling a growing sense of impending doom as you learn about how AI has been used? Do not despair, as there is a lot that can be done. In our podcast episode, we also discuss current initiatives to tackle the harms caused by AWS and AI-enabled technologies. In particular, we explore the ongoing efforts to negotiate a treaty at the United Nations banning autonomous weapons, led by the Stop Killer Robots campaign, of which WILPF is part. Shimona also shares UNIDIR’s work on capacity building and on pushing for meaningful representation of women diplomats and experts at all stages of policymaking concerning lethal autonomous weapons systems (LAWS). Matt discusses Amnesty’s Ban the Scan initiative, which analyses the surveillance apparatus in New York, Hyderabad, and the Occupied Palestinian Territories, among other work carried out by Amnesty Tech. WILPF has also been raising awareness about the dangers of AI-related technologies from a feminist, anti-militarist, and anti-capitalist perspective, among many other activities.
To find out more about what is being done to address the use of AI in human rights violations, and how you can get involved in preventing it, listen to our episode and explore the resources made available there. In the following episodes of Think and Resist, you can also learn more about other timely and important topics, such as military spending, the connections between the fossil fuel and nuclear industries, land grabbing, and more! We hope this new season is useful for activists and organisers everywhere working towards a more peaceful and just world.