
Webinar

Wednesday, 25 May 2022, 3:30 pm – 4:30 pm (HKT)

Explainable AI

09:30-10:30 CEST (GMT+2) / 15:30-16:30 Singapore (GMT+8) / 17:30-18:30 Melbourne (GMT+10)

Artificial intelligence (AI) systems may rely on algorithms such as neural networks and other machine learning mechanisms, which can enhance the robustness and predictive accuracy of applications. However, how AI systems arrive at their decisions may appear opaque and incomprehensible to general users, non-technical managers, or even technical personnel. Algorithmic design may rest on assumptions, priorities and principles that have not been openly explained to users and operations managers. “Explainable AI” and “trustworthy AI” are initiatives to create AI applications that are transparent, interpretable, and explainable to users and operations managers. These initiatives seek to foster public trust, informed consent and fair use of AI applications, and to counter algorithmic bias that may work against the interests of underprivileged social groups.

Speakers:

Prof Matthias C. Kettemann
Head of Research Programme, Hans-Bredow-Institute / HIIG, Germany

Matthias C. Kettemann is Professor of Innovation, Theory and Philosophy of Law and head of the Department for Theory and Future of Law at the University of Innsbruck, Austria, and holds research leadership positions at the Leibniz Institute for Media Research | Hans-Bredow-Institute in Hamburg and the Humboldt Institute for Internet and Society in Berlin.

Prof Liz Sonenberg
Pro Vice-Chancellor (Systems Innovation), University of Melbourne, Australia

Liz Sonenberg is a Professor of Information Systems at the University of Melbourne and holds the Chancellery role of Pro Vice-Chancellor (Systems Innovation). Liz is a member of the Advisory Board of AI Magazine and of the Standing Committee of the One Hundred Year Study on Artificial Intelligence (AI100). Her currently active research projects include “Strategic Deception in AI” and “Explanation in AI”.

Dr Brian Y. Lim
Assistant Professor of Computer Science, National University of Singapore, Singapore

Brian Lim, PhD, is an Assistant Professor in the Department of Computer Science at the National University of Singapore (NUS). He leads the NUS Ubicomp Lab, which focuses on ubiquitous computing and explainable artificial intelligence for healthcare, wellness and smart cities. His research explores how to improve the usability of explainable AI by modeling human factors, and how to apply AI to improve clinical decision making and user engagement towards healthier lifestyles. He serves on the editorial board of PACM IMWUT and on program committees for CHI and AAAI. He received a B.S. in engineering physics from Cornell University and a Ph.D. in human-computer interaction from Carnegie Mellon University.

Moderator: Mr Kal Joffres, CEO and co-founder of Tandemic


Contacts for enquiry:

Lucia Siu
Programme Manager, Heinrich Böll Stiftung, Hong Kong, Asia | Global Dialogue
Email: Lucia.Siu [at] hk.boell.org

Christina Schönleber
Senior Director, Policy and Research Programs, APRU
Email: policyprograms [at] apru.org

Timezone: HKT
Part of the series: Regulating AI: Debating Approaches and Perspectives from Asia and Europe
Address: Online Event
Organizer: Heinrich-Böll-Stiftung Hongkong
Language: English