Title: Making AI Explainable for Novice Technology Users in Low-Resource Settings


Abstract: As researchers and technology companies rush to develop artificial intelligence (AI) applications that aid the health of marginalized communities, it is critical to consider the needs of the community health workers (CHWs) who will increasingly be expected to operate tools that incorporate these technologies. My previous work has shown that these users have low levels of AI knowledge, form incorrect mental models of how AI works, and at times may trust algorithmic decisions more than their own. This is concerning, given that AI applications targeting the work of CHWs are already in active development, and early deployments in low-resource healthcare settings have reported failures that created additional workflow inefficiencies and inconvenienced patients.
Explainable AI (XAI) can help avoid such pitfalls, but nearly all prior work has focused on users who live in relatively resource-rich settings (e.g., the US and Europe) and who arguably have substantially more experience with digital technologies such as AI. My research develops XAI for people with low levels of formal education and technical literacy, with a focus on healthcare in low-resource settings. This work involves demonstrating interactive prototypes with CHWs to understand which aspects of model decision-making need to be explained and how they can be explained most effectively, with the goal of improving how current XAI methods serve novice technology users.

About AI4PAN: the Artificial Intelligence for Pandemics Seminar Series, centred at UQ

Welcome to AI4PAN, the Artificial Intelligence for Pandemics group centred at The University of Queensland (UQ). The group's focus is the application of data science, machine learning, statistical learning, applied mathematics, computation, and other "artificial intelligence" techniques to managing pandemics at both the epidemic and clinical levels.