A PROBABILISTIC CHECKING MODEL FOR EFFECTIVE EXPLAINABILITY BASED ON PERSONALITY TRAITS
Abstract
It is becoming increasingly important for an autonomous system to be able to explain its
actions to humans in order to improve trust and enhance human-machine collaboration.
However, providing the most appropriate kind of explanation – in terms of length, format, and presentation mode – at the proper time is critical to enhancing its effectiveness. Explanation entails costs, such as the time it takes to explain and for humans
to comprehend and respond. Therefore, the actual improvement in human-system tasks
from explanations (if any) is not always obvious, particularly given various forms of
uncertainty in knowledge about humans.
In this research, we propose an approach to address this issue. The key idea is to provide a
structured framework that allows a system to model and reason about human personality
traits as critical elements to guide proper explanation in human and system collaboration.
In particular, we focus on the two concerns of modality and amount of explanation in order
to optimize the explanation experience and improve overall human-system utility. Our models are based on probabilistic modeling and analysis (PRISM-games) to determine at run time the most effective explanation under uncertainty. To demonstrate our approach, we introduce two self-adaptive exemplars – Grid, a virtual game, and the Stock Prediction Engine (SPE) – which allow an automated system and a human to collaborate on the game and on stock investments. Our evaluation of these exemplars, through simulation, demonstrates that a human subject's performance and overall human-system utility are improved when the psychology of human personality traits is considered in providing explanations.
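As a rough illustration of the run-time decision described above – not the actual PRISM-games models used in this work – the sketch below enumerates candidate explanation configurations (modality and amount) and selects the one with the highest expected utility under a belief distribution over a single personality trait. All trait names, benefit values, and cost values are hypothetical placeholders.

```python
# Hypothetical sketch: choosing an explanation (modality, amount) under
# uncertainty about a human's personality trait by maximizing expected
# benefit minus explanation cost. The traits, benefit table, and costs
# below are illustrative placeholders, not values from the thesis.

from itertools import product

MODALITIES = ["text", "graphical"]
AMOUNTS = ["none", "brief", "detailed"]

# Belief over a single (hypothetical) personality trait of the human.
trait_belief = {"introvert": 0.6, "extravert": 0.4}

# Assumed benefit of each (trait, modality, amount) combination to the
# collaborative task, on an arbitrary 0-10 scale.
benefit = {
    ("introvert", "text", "detailed"): 9, ("introvert", "text", "brief"): 6,
    ("introvert", "graphical", "detailed"): 7, ("introvert", "graphical", "brief"): 5,
    ("extravert", "text", "detailed"): 4, ("extravert", "text", "brief"): 6,
    ("extravert", "graphical", "detailed"): 6, ("extravert", "graphical", "brief"): 8,
}

# Cost of producing and comprehending the explanation (e.g., time spent).
cost = {"none": 0.0, "brief": 1.0, "detailed": 3.0}


def expected_utility(modality: str, amount: str) -> float:
    """Expected benefit over the trait belief, minus explanation cost."""
    if amount == "none":
        return 0.0  # no explanation: no benefit, no cost
    gain = sum(p * benefit[(trait, modality, amount)]
               for trait, p in trait_belief.items())
    return gain - cost[amount]


def best_explanation():
    """Pick the (modality, amount) pair with the highest expected utility."""
    return max(product(MODALITIES, AMOUNTS), key=lambda ma: expected_utility(*ma))


if __name__ == "__main__":
    modality, amount = best_explanation()
    print(f"chosen explanation: modality={modality}, amount={amount}, "
          f"expected utility={expected_utility(modality, amount):.2f}")
```

In the approach itself, this kind of trade-off is expressed not as a one-shot expected-utility calculation but as a stochastic game analyzed with PRISM-games, so that the system's explanation choices can be optimized against utility properties at run time.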