Artificial intelligence is already present in many homes in the form of digital personal assistants. These are small, apparently innocuous devices that sit within the home, connected to the internet and to other smart devices. Examples include Alexa (Amazon), Siri (Apple), M (Facebook) and Google Assistant (Google). Digital personal assistants draw on so-called artificial intelligence technology, primarily natural language generation and processing, to interact with human users through voice commands and responses.
Digital personal assistants are promoted as friendly, labour-saving and inclusive devices. Scholars, however, have questioned whether these devices are really so benign, arguing that they should be scrutinised from more robust ethical, political and legal perspectives.
This project investigates Australians’ uses of, and attitudes to, digital home assistants, including in relation to efficacy and safety, privacy, equity and choice, and the changing nature of civic engagement as mediated by digital devices. The project team combines expertise in media and communications, technology, law and regulation to provide cross-disciplinary perspectives on digital home assistants. It seeks to inform regulatory and policy responses to such devices. The project further aims to influence technological design to promote more human-centred and privacy-enhancing outcomes for beneficial uses of digital assistants.
This project receives funding from the Centre for Media and Communications Law.
Judith Shulevitz, ‘Alexa, Should We Trust You?’, The Atlantic (November 2018) https://www.theatlantic.com/magazine/archive/2018/11/alexa-how-will-you-change-us/570844/.
See in particular: Thao Phan, ‘Amazon Echo and the Aesthetics of Whiteness’ (2019) 5(1) Catalyst: Feminism, Theory, Technoscience 1; Yolande Strengers and Jenny Kennedy, The Smart Wife: Why Siri, Alexa, and Other Smart Home Devices Need a Feminist Reboot (MIT Press, 2020). See also Maurice E Stucke and Ariel Ezrachi, ‘How Digital Assistants Can Harm Our Economy, Privacy, and Democracy’ (2017) 32 Berkeley Technology Law Journal 1239. Cf John Danaher, ‘Towards an Ethics of AI Assistants: An Initial Framework’ (2018) 31 Philosophy & Technology 629.