Ensuring Trustworthy AI: From Principles to Action
AI ethics is increasingly recognised as important, if not critical. Many AI ethical codes and frameworks are now on offer. Central features of ethical AI are that it is transparent and unbiased; explainability, accountability and safety are also often listed. But what do these features mean when translated into action? How are they understood when dealing with a machine? And how do we build citizens' trust that these principles are effective in protecting them in their use of AI, and in preserving fundamental social values and rights?
This seminar explores these issues with renowned expert in social and ethical AI, Virginia Dignum, along with a panel discussion including leading industry innovators: Fiona Borrelli (Sargon), Jess Perrin (InfoXchange) and Antony Ugoni (Seek).
Professor Virginia Dignum, Department of Computing Science
I am a Professor at the Department of Computing Science at Umeå University, Sweden, where I lead the research group Social and Ethical Artificial Intelligence. I am a Fellow of the European Artificial Intelligence Association (EURAI) and am also associated with the Faculty of Technology, Policy and Management at the Delft University of Technology. Given the increasing importance of understanding the impact of AI at the societal, ethical and legal level, I am actively involved in several international initiatives on policy and strategy guidelines for AI research and applications. As such, I am a member of the European Commission High Level Expert Group on Artificial Intelligence, the IEEE Initiative on Ethics of Autonomous Systems, the Delft Design for Values Institute, the European Global Forum on AI (AI4People), the Responsible Robotics Foundation, the Dutch AI Alliance (ALLAINL) and the ADAAI foundation.