Memoirs of the Trustworthy and Responsible AI Conference at Cambridge
The Trustworthy and Responsible AI Conference was successfully held at Downing College, University of Cambridge, on June 26, 2023.
Relive the inspiring keynote talks, insightful panel discussions, vibrant networking sessions, and the delightful drinks reception and formal dinner. Experience the energy and passion of the participants as they shared their insights, research findings and visions for a trustworthy and responsible AI future.
This short video takes a captivating journey through the highlights and memorable moments of the conference, offering a glimpse into the intellectually stimulating atmosphere that permeated every corner. The photographs can be accessed here.
Watch the Sessions
Principal Partner: Kavli Centre for Ethics, Science and the Public
Other supporting partners: AI@Cam, Bitfount, Centre for Human-Inspired Artificial Intelligence (CHIA) and Cambridge Global Consulting Ltd (CGC).
Artificial Intelligence (AI) systems are increasingly being deployed in society and are having a profound impact on our daily lives. Despite the many advantages of these systems, it is also necessary to prevent the direct and indirect harms and risks they may pose to users and society. It is therefore critical to ensure that AI systems are trustworthy, responsible, safe and ethical, particularly in high-stakes real-world applications. Meanwhile, research users are strongly encouraged to use AI systems with wisdom, prudence and integrity.
Trustworthy and responsible AI has been gaining significant attention from government, industry and the scientific community. This conference will bring together speakers, delegates and research users from diverse backgrounds, including industry leaders, policymakers, government officials, frontier academic researchers and data scientists, to share cutting-edge knowledge, inspiring findings and views in this timely domain. Such an occasion will facilitate knowledge dissemination among a wide range of research users, help identify challenges and mutual strengths, create opportunities to build and strengthen partnerships and collaborations, and take the conversation further through highly interactive discussion and networking activities.
Opening and Introductions
Session 1: Chair - Dr Lefan Wang
- Prof. Zoe Kourtzi - Robust and interpretable AI-guided tools for early dementia prediction
- Dr David Krueger - Baby Steps Towards Safe and Trustworthy AI in Large Scale Deep Learning
- Dr Richard Milne - AI, trust and the public
Session 1 Panel Discussion: Chair - Dr Richard Milne
Challenges and issues in existing AI systems: opaqueness, bias, fragility, privacy invasion, inefficiency, lack of ethical guidelines, ...
- Prof. Alexandra Brintrup, Prof. Helena Earl, Dr David Krueger, Dr Richard Milne and Dr Sebastian Pattinson
Session 2: Chair - Dr Vihari Piratla
- Prof. Alessandro Abate - Certified learning, or learning for verification?
- Dr Lucas Dixon - Large Language Models, Prototypes & Responsibility
- Prof. Miguel Rodrigues - Fair Federated Learning
- Dr Blaise Thomson - How the real world impacts the use of trustworthy & responsible AI
Session 2 Panel Discussion: Chair - Dr Carolyn Ashurst
Measures and techniques to achieve trustworthy and responsible AI: enhancing transparency, interpretability, fairness, robustness, privacy protection, efficiency, regulation and accountability, human-AI collaboration, continual monitoring, ...
- Prof. Alessandro Abate, Dr Carolyn Ashurst, Dr Lucas Dixon, Prof. Miguel Rodrigues, Dr Blaise Thomson and Dr Miri Zilka
Wrap-up and thank you
Session 3: Poster session and networking