By Ray Acheson


As COVID-19 continues its spread around the world and as death tolls mount in many countries, some governments are looking to technology to help flatten the curve of infection, or to help distribute goods or perform other tasks that might currently endanger human life. This could be seen as positive; who doesn’t want to reduce the risk of exposure? But as reliance on potentially invasive technologies expands, we need to pay attention to what measures are being put in place “for our benefit” now, which technologies are being developed and deployed, what other functions they have served or might serve, and what happens to them after this crisis is over.

We also need to think about the use of these technologies, and justifications for them, in the context of militarism and political economy in order to fully understand who will benefit and who will suffer. We need to consider the potential for weaponisation of some of this technology, or how it can be used to repress and oppress populations. And we need to think about what actions we can take now to prevent the dystopian future that otherwise awaits us.

Tech to the rescue?

Several organisations, including MediaNama, Privacy International, and Project Ploughshares have been documenting and mapping the spread of intrusive technologies and surveillance during the COVID-19 outbreak. Some governments, including China, Ecuador, Germany, India, Israel, Italy, Poland, South Korea, Taiwan, and Thailand, are using GPS via mobile phones to track people in order to enforce quarantines or to engage in “digital contact tracing” to learn where they might have spread the virus. Israel is doing this retroactively, using historical data to track people’s movements over the past weeks. South Korea has combined its monitoring of mobile phone data with surveillance footage and credit card purchases to track the path of infection. In Hong Kong, as of 19 March, everyone arriving is given an electronic wristband to ensure they comply with the mandatory two-week quarantine. Some governments are working with corporations to develop new applications like TraceTogether in Singapore, Home Quarantine in Poland, and Oxford University’s app for the National Health Service in the United Kingdom.
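To make concrete what even the simplest form of location-based contact tracing entails, the sketch below is a minimal, hypothetical illustration in Python, not the design of TraceTogether or any of the systems named above. It flags pairs of people whose phone location pings fall within an assumed distance and time window of each other; the thresholds, names, and data are invented for illustration.

```python
# Hypothetical sketch of GPS-based "digital contact tracing".
# Real deployments use different data sources and far more complex logic;
# the thresholds below are invented for illustration only.
from dataclasses import dataclass
from datetime import datetime, timedelta
from itertools import combinations
from math import radians, sin, cos, asin, sqrt

CONTACT_RADIUS_M = 10                    # assumed proximity threshold (metres)
CONTACT_WINDOW = timedelta(minutes=15)   # assumed co-presence time window


@dataclass
class Ping:
    """One location report from one person's phone."""
    user: str
    time: datetime
    lat: float
    lon: float


def haversine_m(a: Ping, b: Ping) -> float:
    """Great-circle distance between two pings, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(h))


def contacts(pings: list[Ping]) -> set[tuple[str, str]]:
    """Flag every pair of users who were close together at nearly the same time.

    Brute-force over all pairs of pings (O(n^2)); a toy, not a scalable system.
    """
    flagged = set()
    for p, q in combinations(pings, 2):
        if p.user == q.user:
            continue
        if abs(p.time - q.time) <= CONTACT_WINDOW and haversine_m(p, q) <= CONTACT_RADIUS_M:
            flagged.add(tuple(sorted((p.user, q.user))))
    return flagged


if __name__ == "__main__":
    t0 = datetime(2020, 4, 1, 9, 0)
    pings = [
        Ping("alice", t0, 51.50070, -0.12460),
        Ping("bob", t0 + timedelta(minutes=5), 51.50071, -0.12461),
    ]
    print(contacts(pings))  # {('alice', 'bob')}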

Other governments are using drones for a variety of functions. China is using them to spray disinfectant on public areas and to transport medical equipment and quarantine supplies. Other countries, from Honduras to Spain, have started using drones to disinfect public spaces. Robots are also being deployed in China to deliver medical supplies within hospitals and to disinfect rooms. They have also been used to fill orders in warehouses and provide “contactless delivery”. Going much further, Belgium, China, France, India, Israel, Italy, Jordan, Kuwait, Spain, the United Arab Emirates, the United Kingdom, and others have deployed drones to restrict citizens’ movements. Some police departments in the United States are considering doing the same. The US company Draganfly has developed “pandemic drones” that detect and monitor people, using sensors to remotely determine temperature, coughing, sneezing, and heart and respiratory rates. Reportedly, these drones will be deployed in Australia at some point. Saudi Arabia has already deployed drones to measure people’s body temperatures.

But what are the risks?

While the deployment of surveillance technology, drones, and robots may be seen in this crisis as necessary or helpful, we need to ask whether it actually is. As Edward Snowden has noted, in some countries we are being told that our choice is between mass surveillance and the completely uncontrolled spread of the virus. But, he argues, this is a false choice. There are alternatives that would be voluntary and limited; we just need the opportunity to develop those alternatives rather than being told that the only way to save lives is to accept violations of our rights and constant monitoring by tech firms and governments.

We also have to ask: what will happen after the crisis is over? Some of the mechanisms currently being used already violate laws around the privacy of personal health information; around surveillance, monitoring, and tracking of civilians; and around data collection and sharing between the user, the company, and the government. Even if people are willing to overlook this during the COVID-19 outbreak in the interest of protecting public health, there is a risk that information transferred during the crisis may be retained and used in other ways later on.

There is also a risk that expectations around sharing this kind of information may shift as a result of the crisis, and that our ability or willingness to resist corporate or governmental infringements on our privacy may be reduced over time. Over 100 civil society organisations signed a joint statement warning against these risks, noting that once a state surveillance apparatus is expanded, it is very difficult to dismantle. Just as armed soldiers in US train stations became normal after 9/11, once surveillance and militarised violence are introduced into our daily lives, they become a normal part of our landscape.

The rise of the digital panopticon

Long before COVID-19 broke out, fights around surveillance, from the use of facial recognition software to predictive policing algorithms, were already unfolding. Several cities around the world have banned facial recognition, understanding that this technology violates privacy laws, is technologically flawed, and is programmed with racial, gender, and other forms of human bias. But in other locales, the use of facial recognition and artificial intelligence to monitor and control populations is becoming widespread. In Xinjiang, China, Uyghurs live under mass surveillance. Marseille, France, has been experimenting with a combination of artificial intelligence, citizen data, and government surveillance networks to conduct predictive policing and to aid in police investigations.

The proliferation of drones around the world was already leading to their domestic use by police before the pandemic. With the increased deployment of drones during the COVID-19 crisis, there is a risk that this use becomes normalised: the police may continue to use drones to surveil populations or to disperse crowds after this crisis is over, in violation of our right to assembly.

In the interests of preventing large gatherings of people, some governments have banned marches, demonstrations, and protests. Under international human rights law, given the existing circumstances and the scientific advice on social distancing, these actions are justifiable, provided they are proportionate and temporary. But UN human rights experts have urged states “to avoid overreach of security measures in their response to the coronavirus outbreak and reminded them that emergency powers should not be used to quash dissent.” Right now, drones are being used to enforce these measures. What happens to these ordinances and declarations once the crisis is over? And what happens to the tech being deployed now to enforce them?

Many cities, parks, and other public spaces have, over the years, passed no-fly zone legislation in relation to drones. These rules may be relaxed or even tossed out in the aftermath of the crisis as the case is made that the use of drones for public surveillance is “useful”. Even in the midst of the crisis, the city of Baltimore in the United States approved the use of “surveillance planes” to conduct persistent monitoring of the city under the guise of aiding investigations of violent crimes—not directly related to stopping the spread of COVID-19, but undoubtedly using the moment of citywide distraction to pass this controversial legislation.

Weaponising these technologies

Beyond the immediate risks of repression of rights and normalisation of the digital panopticon, there is also the danger of these technologies being weaponised.

If surveillance drones become normalised in our societies, who is to say that the deployment of armed drones by police and government forces will not soon be on the table? Armed drones are already deployed by many of our governments around the world, used to kill extrajudicially and with impunity. Their surveillance packages are used to create target profiles of human beings, which are then acted upon by armed drones. This has resulted in incredibly high numbers of civilian deaths and has violated international humanitarian and human rights law in terms of targeting, use of force, and protection of civilians.

Robots are on the verge of becoming weaponised. And these robots may be programmed with surveillance packages—tracking movements through data collection and analysis, using facial recognition software to build target profiles, and so on. The Campaign to Stop Killer Robots has been urging governments for several years to work to prevent the development of autonomous weapon systems before it is too late.

Already, more than 380 partly autonomous weapon systems have been deployed or are being developed in at least 12 countries, including China, France, Israel, South Korea, Russia, the United Kingdom, and the United States. Major tech companies like Microsoft and Google have been commissioned to create or adapt technologies for military use, such as Microsoft’s HoloLens headset, which soldiers are using “to detect, decide and engage before the enemy”, and its JEDI cloud computing contract with the Pentagon, which will involve developing and running applications that are part of the drone kill chain.

If this trend continues unconstrained, humans will eventually be cut out of crucial decision-making over the use of force and the control of weapon systems. There is a grave risk that the tools used now for mass surveillance, controlling populations, or distributing goods may be weaponised in the future. We may be tracked and monitored during the COVID-19 outbreak, but these same technologies may be used to kill us later.

The risks of autonomous weapons

Many roboticists, scientists, tech workers, philosophers, ethicists, legal scholars, human rights defenders, and peace and disarmament activists have raised concerns that autonomous weapons will be unable to comply with international humanitarian law or human rights law, and that they will make war or the use of force more likely, encourage an arms race, destabilise international relations, and have moral consequences such as undermining human dignity.

Algorithms would create a perfect killing machine, stripped of the empathy, conscience, or emotion that might hold a human soldier back. Robots programmed to kill might also accidentally kill civilians by misinterpreting data. They would lack the human judgment necessary to evaluate the proportionality of an attack, distinguish civilian from combatant, and abide by other core principles of the laws of war. They would also be susceptible to cyberattacks and hacking. And they would be unaccountable for their mistakes or misuse.

Autonomous weapons will be weapons of power and inequity: countries of the global south may not be the ones to develop and use killer robots, but they will likely become the battlegrounds for the testing and deployment of these weapons. It will be the rich countries using these weapons against the poor—and the rich within countries using them against their own poor, through policing and internal oppression.

The construction of target profiles will reduce human beings to ones and zeroes—targeted for their sex or gender identity, their race or ethnicity, their religious clothing, their disability, or other physical markers that the machine deems “correct” for killing. This will exacerbate racial profiling in policing, border surveillance, and other activities; it will result in gender- and race-based violence.

The political economy of repression and violence

Autonomous weapon systems are part of a continuum of militarism and capitalism. Investments in new technologies of violence and surveillance—in particular at a time when it is startlingly clear that ensuring real security and safety lies in investments in health care and in guaranteed incomes and housing—are part of the capitalist death cult. Governments and corporations are investing money in developing more “efficient” killing machines and surveillance technologies, while they actively disinvest from infrastructure that supports social well-being.

Our societies cannot and should not afford any more spending on the military and militarised technology. Instead of looking for new ways for the military to grow and develop, we need to urgently demilitarise and redirect that spending.

We also need to challenge the capitalist mindset driving both neoliberal economic policies and militarism. Both rely on ideas of “efficiency” and of “non-interference”. That is, neoliberalism purports that states should not “interfere” in the economy (unless it is to bail out big corporations, of course) because humans will make the most rational choices; while the militarism behind autonomous weapons purports that humans should be removed from decision-making and the execution of violence because artificial intelligence or algorithms are capable of making more rational choices.

The pursuit of autonomous weapons also mirrors the capitalist pursuit of profit, and the inequalities this system of perpetual economic “growth” creates. Who has access and means to produce autonomous weapons? Which countries, and which people within those countries? Who makes the decision to develop these weapons (i.e. political and military elite), and who is forced to become complicit in creating and using technologies of oppression and violence (i.e. tech workers and soldiers)? Whose labour will be used for the extraction of the minerals and resources needed to build these technologies? Who will be on the receiving end of the violence and oppression perpetrated through the use of autonomous weapons? What political projects will be enacted through their use—i.e. suppressing protest and dissent, murdering dissidents, targeting entire “categories” of human beings based on profiling particular characteristics and identities?

These are critical questions not just for autonomous weapons, of course, but for all responses to the coronavirus and for the development and deployment of surveillance technologies and weapon systems. In particular, the reflexive turn to militarism as the most appropriate or even the only solution to crises must end. We must end it.

Investing in well-being

Banning autonomous weapons is a good step to curbing the potential harm of some of the technologies discussed here. But as former Google employee Amr Gaber says, we need to do more to make sure technology remains in the service of human rights, not repression and violence.

We need to stop the spread of the digital panopticon through surveillance, facial recognition, the use of algorithms for policing and detention, etc. We need to maintain or develop prohibitions on the deployment and use of drones for surveillance and violence. We need to prohibit autonomous weapon systems and prevent other technologies that will seek to automate violence. And we need to divert our resources—human, material, and economic—away from militarism and towards the things that actually keep us safe and well, together.

When he left office, former UN High Commissioner for Human Rights Zeid Ra’ad Al Hussein lambasted the “feckless politicians” for being “too weak to privilege human lives, human dignity, tolerance—and ultimately, global security—over the price of hydrocarbons and the signing of defence contracts.” In many countries, we are in this moment continuing to witness “those dangerous or useless politicians … threaten humanity.” But we have strength in our collective action. Courage and defiance are where change lies. And our actions now to preserve human rights and human dignity and to prevent the further abstraction of violence and war will be imperative to our survival later.