The providers of AI tools are currently not legally obliged to make their tools unusable for fraud or deception. In law, there is no duty of care to the world at large to prevent someone else's fraud.

Why this matters

AI is increasingly everywhere, and everyone should be able to participate in the debate about the policies and norms that govern this technology and its effects on us and on society.

This has to be a cross-disciplinary endeavour, with knowledge and expertise shared by engineers, computer scientists, medical researchers, humanities scholars and lawyers.

So, for example, I am working with medical researchers and computer scientists to understand more about the role of AI in supporting professional services, such as chatbots supporting mental health or guiding consumers through legal disputes.

My passion is to demystify AI and the regulation of AI.

Our impact

My current research focuses on 'misleading AI'.

I have been looking at deepfakes and the harms they cause for people, businesses and society, as well as the over-promotion of the capabilities of AI tools.

Generative AI tools can create image, video and voice replicas of real people saying and doing things they never would have done. And these tools are becoming increasingly easy to access and use.

I am interested in the legal and technical guardrails that might be used to stem the tide of deepfake fraud, scams and deception. It is a cat-and-mouse game, and no single response is likely to be entirely effective.

However, the response doesn't need to be perfect: slowing down AI-generated fakes and frauds can still reduce harm.

I've recently been fortunate to be a member of the Commonwealth Government's temporary AI expert advisory group, considering mandatory guardrails for safe and responsible AI.

I have also been convening a series of seminars and workshops, and producing materials, on AI, law and policy - you can find out more on the CAIDE website.

A collaboration between engineering, computer science, medicine, arts, education and law, the Centre for Artificial Intelligence and Digital Ethics (CAIDE) brings together experts from across the University of Melbourne to facilitate cross-disciplinary research, teaching and leadership on the ethical, technical, and legal issues relating to AI.

CAIDE's research seeks to answer the sorts of questions we might be faced with in the not-too-distant future, like:

  • As we program AI, can we ensure today's inequalities don't become entrenched in algorithmic biases?
  • In a world where tech evolves so quickly, how can public and private regulators keep up?
  • Is it ethical to outsource healthcare and education to chatbots and AI?

Researcher

Jeannie Paterson

Professor and Director, Centre for AI and Digital Ethics