Explainable Goal Recognition Systems

dc.contributor.advisor: Vered, Mor
dc.contributor.author: Alshehri, Abeer
dc.date.accessioned: 2025-09-07T05:06:29Z
dc.date.issued: 2025
dc.description.abstract: This thesis explores human-centred approaches to explaining and understanding why goal recognition (GR) agents predict specific goal hypotheses. Goal recognition is the process of inferring an agent’s hidden goal from its observed behaviour; it plays a crucial role in AI and has many practical applications. Since the field’s inception, understanding the behaviour, decisions, and actions of artificial intelligence (AI) agents has been a core focus of research. As these systems grow increasingly complex, their reasoning processes often become opaque to end users, raising significant challenges in high-stakes and collaborative environments. A lack of transparency can undermine trust and hinder effective decision-making. Enhancing the explainability of autonomous agents is therefore vital, enabling users to comprehend and trust the reasoning behind these systems’ predictions.

Understanding how humans generate, select, and convey explanations can serve as a basis for developing effective explainable agents. Explaining the behaviour and predictions of GR agents engaged in sequential decision-making presents unique challenges. Traditional approaches to explainability often focus on aligning an agent’s behaviour with an observer’s expectations or on making the reasoning behind decisions more transparent. Building on insights from cognitive science and philosophy, this thesis delves deeper into the nature of explanations within human cognition.

The central contribution of this work is the eXplainable Goal Recognition (XGR) model, a novel framework that generates counterfactual explanations for GR agents. The XGR model addresses “why” and “why not” questions by leveraging insights from two human-agent studies and proposing a conceptual framework for human-centred explanations of GR. Building on these foundations, the thesis extends the XGR model with the Hypothesis-Driven XGR model, which integrates the emerging decision-making paradigm of Evaluative AI. Our empirical evaluations demonstrate that the proposed models enhance trust in GR agents and effectively support user decision-making, outperforming baseline approaches across key domains.

This research presents the first systematic investigation into human-centred explanations for goal recognition systems in sequential decision-making domains. It advances the field of explainable AI and provides practical methods for improving user understanding of, and trust in, GR systems.
dc.format.extent: 147
dc.identifier.uri: https://hdl.handle.net/20.500.14154/76354
dc.language.iso: en
dc.publisher: Saudi Digital Library
dc.subject: Artificial Intelligence
dc.subject: Explainable AI
dc.subject: Goal Recognition
dc.title: Explainable Goal Recognition Systems
dc.type: Thesis
sdl.degree.department: Computing and Information Systems
sdl.degree.discipline: Artificial Intelligence
sdl.degree.grantor: University of Melbourne
sdl.degree.name: Doctor of Philosophy
sdl.thesis.source: SACM - Australia

Files

Original bundle

Name: SACM-Dissertation.pdf
Size: 8.39 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.61 KB
Description: Item-specific license agreed to upon submission
