Explainable Goal Recognition Systems
Date
2025
Publisher
Saudi Digital Library
Abstract
This thesis explores human-centred approaches to explaining and understanding why goal recognition (GR) agents predict specific goal hypotheses. Goal recognition is the task of inferring an agent’s hidden goal from its observed behaviour; it plays a crucial role in AI and has a wide range of practical applications.
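As a rough illustration of this inference problem, the sketch below frames goal recognition as Bayesian inference over a set of goal hypotheses, scoring each goal by how well the observed actions fit an optimal plan for it. The Boltzmann-rationality model, the cost function, and all names in the code are illustrative assumptions, not the formulation used in the thesis.

```python
import math

def goal_posterior(observations, goals, cost_fn, prior=None, beta=1.0):
    """Return P(goal | observations) under a Boltzmann-rational agent model.

    cost_fn(observations, goal) should return how much extra cost the
    observed action prefix incurs relative to an optimal plan for `goal`
    (lower means the observations fit the goal better).
    """
    prior = prior or {g: 1.0 / len(goals) for g in goals}
    # Likelihood of the observations under each goal hypothesis, weighted by
    # the prior; beta controls how rational the observed agent is assumed to be.
    scores = {g: math.exp(-beta * cost_fn(observations, g)) * prior[g]
              for g in goals}
    total = sum(scores.values())
    return {g: s / total for g, s in scores.items()}

# Toy usage: after observing "walk to kitchen", the goal "make coffee"
# fits the observations far better than "watch TV".
extra_cost = {"make coffee": 0.2, "watch TV": 2.5}
posterior = goal_posterior(["walk to kitchen"], list(extra_cost),
                           lambda obs, g: extra_cost[g])
print(posterior)  # roughly {'make coffee': 0.91, 'watch TV': 0.09}
```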
Since the field’s inception, understanding the behaviour, decisions, and actions of artificial intelligence (AI) agents has been a core focus of research. As these systems grow increasingly complex, their reasoning processes often become opaque to end users, raising significant challenges in high-stakes and collaborative environments. This lack of transparency can undermine trust and hinder effective decision-making. Enhancing the explainability of autonomous agents is therefore vital: it enables users to comprehend and trust the reasoning behind these systems’ predictions.
Understanding how humans generate, select, and convey explanations can serve as a basis for developing effective explainable agents. Explaining the behaviour and predictions of GR agents engaged in sequential decision-making presents unique challenges. Traditional approaches to explainability often focus on aligning an agent’s behaviour with an observer’s expectations or making the reasoning behind decisions more transparent. Building on insights from cognitive science and philosophy, this thesis delves deeper into the nature of explanations within human cognition.
The central contribution of this work is the introduction of the eXplainable Goal Recognition (XGR) model, a novel framework that generates counterfactual explanations for GR agents. The XGR model addresses “why” and “why not” questions by leveraging insights from two human-agent studies and proposing a conceptual framework for human-centred explanations of GR. Building on these foundations, the thesis extends the XGR model by introducing the Hypothesis-Driven XGR model, which integrates the emerging decision-making paradigm of Evaluative AI. Our empirical evaluations demonstrate that the proposed models enhance trust in GR agents and effectively support user decision-making, outperforming baseline approaches across key domains.
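To make the contrastive flavour of such “why not” questions concrete, the following minimal sketch answers one by contrasting the observed actions with an optimal plan for the rejected (foil) goal. The plan-library representation and all function names are assumptions made for illustration; they are not the XGR model’s actual method or API.

```python
def why_not_explanation(observed, predicted_goal, foil_goal, optimal_plans):
    """Answer "why not foil_goal?" by citing observed actions that occur in
    an optimal plan for the predicted goal but in none for the foil goal."""
    foil_actions = set(optimal_plans[foil_goal])
    evidence = [a for a in observed if a not in foil_actions]
    return (f"Predicted '{predicted_goal}' rather than '{foil_goal}' because the "
            f"observed actions {evidence} serve no optimal plan for '{foil_goal}'.")

# Toy plan library mapping each candidate goal to one optimal plan.
plans = {
    "make coffee": ["walk to kitchen", "fill kettle", "boil water"],
    "watch TV": ["walk to lounge", "pick up remote"],
}
print(why_not_explanation(["walk to kitchen", "fill kettle"],
                          "make coffee", "watch TV", plans))
```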
This research presents the first systematic investigation into human-centred explanations for goal recognition systems in sequential decision-making domains. It advances the field of explainable AI and provides practical methods to improve user understanding and trust in GR systems.
Keywords
Artificial Intelligence, Explainable AI, Goal Recognition