The ethics of artificial intelligence

By Cat Knights, Senior Grants Officer, Melbourne Law School

Two exciting new Melbourne Law School research initiatives are set to tackle the ethical and social policy challenges of digital technologies, particularly artificial intelligence and automated decision-making.

Artificial intelligence is becoming increasingly integral to everyday life.

Many common applications of this technology appear entirely innocuous. Algorithms may suggest products to buy, deliver interesting news, help to navigate traffic, find information and identify the need for an umbrella on any given day. On top of these common uses, artificial intelligence also assists military operations, provides highly effective security and surveillance technology, curates advertising to manipulate the shopping habits of consumers, and may even determine whether or not a bank loan is approved.

However, the basis on which these kinds of decisions are made is typically opaque and risks being unavailable for scrutiny, review or challenge. How do we know if decisions are biased or discriminatory, whether recommendations are fair and fit for purpose or whether people are being harmed by the online material with which they are targeted? Are current consent and data privacy laws sufficient to protect people’s privacy? What is the level of care expected from online advisers, or those making health recommendations? And, importantly, how or to whom do we complain about these problems?

MLS has been working to address these questions and is introducing two important new research initiatives to speed up the work. Firstly, researchers at MLS, the School of Computing and Information Systems (CIS) in the Melbourne School of Engineering, and the Faculty of Arts are set to launch the Centre for Artificial Intelligence and Digital Ethics (CAIDE) – a new interdisciplinary research body at the University of Melbourne. Secondly, MLS and CIS academics are collaborating with several other Australian universities and global industry partners on the Australian Research Council Centre of Excellence for Automated Decision-Making and Society.

“Technology is advancing rapidly, and it is important to ensure that it is being used in ways that are fair, transparent and accountable,” says Professor Jeannie Paterson, Co-Director of CAIDE.

“This requires a combination of skills: technical expertise, policy judgment and insight into effective regulatory design.” CAIDE will bring experts in these disciplines together, into the same space, to tackle these problems.

It will also engage with the wider University and the community to develop understanding of AI and related technologies, and of the values that matter to us in responding to these advances.

CAIDE will be involved in research, teaching, engagement in public debate, and building a network of high-level national and international partners.

It will be located at the upcoming Melbourne Connect, the collaborative innovation centre currently being constructed on the former Royal Women’s Hospital site, on the corner of Swanston and Grattan Streets.

“The location of the Centre is important,” says Paterson. “We will be co-located with cutting-edge technology companies and others working in this field, so it will allow us to properly model the community-engaged, collaborative and high-quality research and thought leadership that CAIDE represents.”

The work of CAIDE will revolve around the four pillars of the Centre: fairness, accountability, transparency and privacy. Importantly, CAIDE will have a particularly Australian focus, looking into applied problems that are unique to the Australian context.

Professor Megan Richardson will be leading MLS’s contribution to the Australian Research Council (ARC) Centre of Excellence for Automated Decision-Making and Society, together with MLS colleagues Professors Christine Parker and Andrew Kenyon and Associate Professor Andy Roberts. Richardson will look to expand on her work on privacy and data protection.

“All of us are already greatly involved in related research and offering policy recommendations,” she notes. “The ARC funding will allow us to expand the breadth and depth of empirical research in the field of automated decision-making, creating a stronger evidence base from which to guide the development of necessary social policy in this field.”

For CAIDE, one key research focus is ‘Fairness and Anti-Discrimination in Automated Decision-Making’. CIS’s Associate Professor Tim Miller, Co-Director of CAIDE, explains:

“In areas such as law, health, and finance, there is a push for more decisions to be made automatically or, at least, for human decisions to be increasingly informed by automated tools. Often, these decisions are made about individuals and groups, and can have a major effect on human lives, such as predicting recidivism, assessing job applications, offering health services, or assessing loan applications.

“This leads to the possibility of biased decisions that discriminate against certain individuals and groups.

“It is alarmingly common that algorithms produce biased results, based on ethnic background, gender or sexuality. Therefore, this project aims to study algorithms and models to detect and potentially remediate this type of bias, but, critically, in a manner that is informed by and consistent with existing Australian legal regimes and responsive to future challenges.”

Detecting and remediating bias is a significant technical challenge, and Australian law in this area is complex. Any response must therefore be designed by an interdisciplinary team.
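To make the idea of "detecting bias" concrete, one common statistical check is demographic parity: comparing the rate at which an automated system makes positive decisions for different groups. The short Python sketch below is purely illustrative — the data, group labels and function names are invented for this example, and it does not represent CAIDE's or the ARC Centre's methodology.

```python
# Illustrative demographic parity check on hypothetical loan decisions.
# A decision of 1 means "approved"; group labels "A" and "B" are invented.

def selection_rate(decisions, groups, target_group):
    """Fraction of positive decisions received by members of one group."""
    member_decisions = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(member_decisions) / len(member_decisions)

def demographic_parity_gap(decisions, groups, group_a, group_b):
    """Absolute difference in positive-decision rates between two groups.
    A value near 0 indicates parity on this one narrow measure."""
    return abs(selection_rate(decisions, groups, group_a)
               - selection_rate(decisions, groups, group_b))

# Hypothetical decisions: group A is approved 75% of the time, group B 25%.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups, "A", "B")
print(f"Demographic parity gap: {gap:.2f}")  # prints "Demographic parity gap: 0.50"
```

A large gap flags a disparity but does not by itself establish unlawful discrimination, and demographic parity can conflict with other fairness measures — which is precisely why translating legal standards into technical tests requires the interdisciplinary design the researchers describe.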

“The legal standards may exist, but the current tools to implement them are outdated when it comes to regulating something like artificial intelligence,” says Miller. “They need to be re-evaluated for the current and future world, as these technologies and practices are constantly developing. CAIDE will be perfectly poised to take on this challenge.”


This article originally appeared in MLS News, Issue 22, November 2019