Seminar: Explainable Artificial Intelligence: how to build interpretable and trustworthy intelligent systems

Marco Lippi, Dipartimento di Ingegneria dell’Informazione (DINFO), University of Florence
May 26th, 2025, 10:00 am
The adoption of artificial intelligence (AI) is rapidly expanding across sectors such as healthcare, finance, transportation, manufacturing, and entertainment. Its growing popularity in recent years is largely due to its ability to automate complex tasks, such as processing large volumes of data or identifying patterns, and to its increasing accessibility to the public. However, despite this widespread use, many AI systems still function in ways that are opaque to all parties involved: the developers who create them, the organizations that deploy them, and the individuals affected by their use. In many cases, even the providers of these systems cannot fully explain how they arrive at specific decisions or outcomes. This issue is commonly known as the “black box” problem.
Explainable Artificial Intelligence (XAI) seeks to address this challenge by enabling AI systems to offer clear, understandable explanations for their actions and decisions. Its main objective is to make the behavior of AI systems transparent and comprehensible to humans by revealing the underlying mechanisms that drive their decision-making processes.
Considering these challenges, Marco Lippi, Professor of Computer Engineering at the University of Florence, will present methods for designing interpretable and trustworthy intelligent systems using Explainable Artificial Intelligence approaches.
More info: Alessandro Montaghi (alessandro.montaghi@cnr.it)