I am a graduate researcher in the University of Melbourne node of the Australian Research Council Centre of Excellence for Automated Decision-Making and Society. My research, which examines human intervention in the regulatory response of repositories of decision-making power to automated decision-making (ADM), is supervised by Dr Jake Goldenfein, Professor Kristen Rundle, and Professor Andrew Kenyon.
I graduated in the top ten percent of my undergraduate degree in Law from UCC School of Law, Cork, and Temple Law School, Philadelphia, and graduated cum laude, with the additional 'Academic Excellence Track' honours programme, from my master's degree in Public International Law at Amsterdam Law School.
I have also worked as a research analyst on the Horizon Europe projects 'CEASEFIRE', which concerns AI technologies to combat illicit firearms trafficking, and 'EURMARS', which is developing an advanced platform to improve European border security. My role in CEASEFIRE involved leading contributions on all legal, ethical, personal data, fundamental rights, and privacy aspects of the project.
Automated Decision-Making: an examination of human intervention in the regulatory response of repositories of decision-making power to ADM.
As repositories of decision-making power increasingly employ Automated Decision-Making (ADM) systems, the question of how automated decisions can be both explained and contested becomes pertinent. Considerable emphasis has been placed on the concept of explainable artificial intelligence ('xAI'), as well as on human oversight concepts like 'human-in-the-loop' (HITL), as regulatory strategies to add an element of 'humanness' to automated decisions. This 'human intervention' is seen as a key factor in promoting meaningful contestation, helping to explain decisions, and encouraging societal and institutional trust in ADM for public governance.
However, these regulatory and supervisory strategies, and specifically HITL, place considerable reliance on the human's intervention in these automated decision processes. The human-centred reliance on HITL provides little understanding of what exactly a decision is, when exactly a decision is made, or even what function the human performs when intervening, in physical form, in a digital decision-making process. Inserting a human through HITL may instead act as a 'rubber stamp' that humanises automated decision-making in government and thereby renders it legally, socially, and institutionally acceptable. As such, the key research question focuses on the value, function, and impact of HITL as a regulatory strategy in ADM: specifically, whether HITL, as a somewhat uncontested concept in the governance domain, can in fact regulate the malleable processes of ADM.
- Administrative/Public Law
- Regulating emerging technologies
- AI and robots
- Law and Society