Black Mirror’s “Hated in the Nation”: Facial recognition is a weapon

Sohee Kwon is a student in the CAIDE subject AI, Ethics and the Law. This is an excerpt from her second assignment, a review of a media piece demonstrating AI Ethics principles.

What is the power of words? In Charlie Brooker’s British science fiction television series Black Mirror, words can summon swarms of robotic bees to kill you. “Hated in the Nation”, the sixth episode of Black Mirror’s third season, imagines a dystopian society wherein online hate produces deadly consequences, namely mass murder enabled by artificial intelligence (AI). The episode problematises a number of technologies, highlighting both the current and the potential issues of AI facial recognition.

“Hated in the Nation” begins with the police investigation of the sudden, unexpected, and gruesome death of Jo Powers, a journalist recently vilified by the British public for lambasting a disability rights activist’s self-immolation. Three detectives trace a seemingly unrelated string of deaths and learn that each victim was killed by an Autonomous Drone Insect (ADI), a robotic pollinator designed by the technology company Granular to address the collapse of bee colonies. However, the detectives later discover that Garret Scholes, a former Granular programmer, hacked these apian robots.


Disillusioned with social media, Scholes sought to demonstrate the consequences of online hatred and the mass hounding of people online by creating the hashtag “#DeathTo”: each day, the person at whom this hashtag was most frequently directed was killed. Gradually, the episode reveals that his true targets are the social media users who wield this hateful hashtag themselves. We learn that Scholes orchestrated a nation-wide massacre by programming the ADIs to fly into victims’ brains to cause the most excruciating death possible.

“Hated in the Nation” employs the metaphor of swarming bees to show how mob mentality produces destructive moral outrage on social media, much as an unthinking swarm strikes its target. It critiques the rapid circulation of unreflective emotional reactions online, a key feature of what keeps us engaged with such platforms. Social media companies are well aware that strong emotional reactions drive engagement, and they design their platforms to exploit this vulnerability[1]. Hence, the episode implicitly condemns these online platforms for allowing and propagating outrage that includes the language of hate, threats of violence, and worse, predominantly out of self-interest.

The AI nightmare presented by this episode provokes us to question: what are the consequences of the weaponisation of online outrage? How can facial recognition technologies take advantage of the vulnerabilities of mob behaviour on social media?

The mass surveillance of people is a central ethical concern in “Hated in the Nation”. Granular Project Leader Rasmus Sjoberg reveals that the Government of the United Kingdom (UK) permitted the ADI project to run on the condition that the ADIs carry advanced facial recognition technology, the visual feed of which the government’s national security services can access. “Hated in the Nation” warns us that countries are on a trajectory towards despotic surveillance states with total control over data about us, with no guarantee that such information will not be misused by the state and by businesses. The episode emphasises the arrogance of institutions that trust their own security systems and foregrounds the vulnerability of surveillance technology to misuse.

Millions of ADIs were further subject to “function creep”, or “dual use”[2]: the widening of a system or technology’s use beyond the purpose for which it was originally intended, which in this case means moving beyond mere pollination to become a full social surveillance technology. Even though Scholes is the culprit of this Black Mirror episode, numerous real-world hackers seek to break into systems and exploit them for their own gain, through cyberwarfare and malicious hacking.[3] Those who create and deploy AI themselves, namely governments and businesses, can abuse AI and data without our knowledge. “Hated in the Nation” suggests that unconstrained government surveillance presents too many dangers to be allowed free rein; these institutions need to be held accountable for their actions.


Facial recognition is already used by police in numerous cities, despite the technology’s notoriously high error rates. Algorithmic bias in criminal justice extends beyond facial recognition: COMPAS, for instance, is an AI tool used by courts to estimate a defendant’s risk of recidivism. Even without any facial recognition component, COMPAS is biased against Black people and has led to instances of racial and intersectional discrimination against Black defendants.[4] Even well-intentioned manufacturers can create technologies that disproportionately affect people of colour and women.

We need increased transparency and accountability measures for AI,[5] rather than secretive government adoption of widespread AI surveillance programs, as represented in the dystopia of “Hated in the Nation”. Trade-secrecy laws that allow corporations to shield themselves from scrutiny need to be changed, as these laws create a legal “black box”, much like the opaque algorithmic “black box” of technology companies.[6] Legislation should further require that biometric technologies be reviewed for bias, civil rights, and privacy concerns, for example through Algorithmic Impact Assessments.[7]

Autonomous technologies that are not operated by human beings may seem less vulnerable to greedy and power-seeking individuals; however, they are far from equitable.[8] The naïve faith of Granular’s directors in autonomous AI in “Hated in the Nation” shows that even without Scholes’s direct control and orchestration of the ADIs, autonomous technologies can cause havoc. The efficiency of autonomous AI serves convenience and cost-cutting rather than bettering people’s lives. Technological neutrality is an illusion. AI is created by humans, who are by their very nature flawed and emotional beings; it can therefore be designed, intentionally or not, to operate in unjust and discriminatory ways. The problem is not facial recognition technology itself, but its potential for misuse and error when placed in the hands of highly fallible and small-minded human beings.[9]

References

[1] Olivia Solon, ‘Ex-Facebook president Sean Parker: site made to exploit human ‘vulnerability’’, The Guardian (Web Page, 10 November 2017) <https://www.theguardian.com/technology/2017/nov/09/facebook-sean-parker-vulnerability-brain-psychology>.

[2] Kobi Leins, Jey Han Lau and Timothy Baldwin, ‘Give Me Convenience and Give Her Death: Who Should Decide What Uses of NLP are Appropriate, and on What Basis?’ (2020) Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2908.

[3] Richard Nock, ‘Algorithms, Neural Networks and Other Machine Learning Techniques’ in Cliff Bertram, Asher Gibson and Adriana Nugent (eds), Closer to the Machine: Technical, Social and Legal Aspects of AI (Office of the Victorian Information Commissioner, 2019) 79.

[4] Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, ‘A Computer Program Used For Bail And Sentencing Decisions Was Labelled Biased against Blacks. It’s Actually Not that Clear.’, ProPublica (Web Page) <https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas/>.

[5] Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Allen Lane, 2016) ch 5.

[6] Kate Crawford, ‘Halt the Use of Facial-Recognition Technology until It Is Regulated: Until Appropriate Safeguards Are in Place, We Need a Moratorium on Biometric Technology that Identifies Individuals, Says Kate Crawford’, Nature (Web Page, 27 August 2019) <https://www.nature.com/articles/d41586-019-02514-7>.

[7] ‘State v Loomis: Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessments in Sentencing’ (2017) 130 Harvard Law Review 1530.

[8] Tim Miller, ‘Explainable Artificial Intelligence: What Were You Thinking?’ in Niels Wouters, Grant Blashki and Helen Sykes (eds), Artificial Intelligence: For Better or Worse (Future Leaders, 2019).

[9] Roger Brownsword, ‘From Erewhon to AlphaGo: For the Sake of Human Dignity, Should We Destroy the Machines?’ (2017) 9(1) Law, Innovation and Technology 117.