Measuring Human’s Trust in Robots in Real-time During Human-Robot Interaction
Date
2025
Publisher
Swansea University
Abstract
This thesis presents a novel, holistic framework for understanding, measuring, and optimising human trust in robots, integrating cultural factors, mathematical modelling, physiological indicators, and behavioural analysis to establish foundational methodologies for trust-aware robotic systems. Through this comprehensive approach, we address the critical challenge of trust calibration in human-robot interaction (HRI) across diverse contexts.
Trust is essential for effective HRI, impacting user acceptance, safety, and overall task performance in both collaborative and competitive settings. This thesis investigates a multi-faceted approach to understanding, modelling, and optimising human trust in robots across various HRI contexts. First, we explored cultural and contextual differences in trust through cross-cultural studies in Saudi Arabia and the United Kingdom. The findings showed that trust factors such as controllability, usability, and risk perception vary significantly across cultures and HRI scenarios, highlighting the need for flexible, adaptive trust models that can accommodate these dynamics.
Building on these cultural insights as a critical dimension of our holistic trust framework, we developed a mathematical model that emulates the layered framework of trust (initial, situational, and learned) to estimate trust in real time. Experimental validation through repeated interactions demonstrated the model's ability to calibrate trust dynamically, with both trust perception scores (TPS) and interaction sessions serving as significant predictors. The model shows promise for adaptive HRI systems capable of responding to evolving trust states.
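The abstract names two predictors of estimated trust — the trust perception score (TPS) and the interaction-session index — without giving the model itself. As a minimal illustrative sketch only (the coefficient values and linear form below are invented, not taken from the thesis), a real-time trust estimate driven by those two predictors could look like:

```python
# Illustrative sketch, NOT the thesis's model: a simple linear estimate of
# trust from the two predictors the abstract names (TPS and session index),
# clamped to a valid [0, 1] trust range. All weights are hypothetical.

def estimate_trust(tps: float, session: int,
                   w_tps: float = 0.7, w_session: float = 0.05,
                   bias: float = 0.1) -> float:
    """Estimate current trust in [0, 1] from TPS and session count."""
    raw = bias + w_tps * tps + w_session * session
    return max(0.0, min(1.0, raw))  # clamp to the valid trust range
```

With these made-up weights, `estimate_trust(0.8, 3)` yields roughly 0.81, and the clamp keeps repeated positive sessions from pushing the estimate above 1.0.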
To further enhance our comprehensive trust measurement approach, this thesis explored physiological behaviours (PBs) as objective indicators. Using electrodermal activity (EDA), blood volume pulse (BVP), heart rate (HR), skin temperature (SKT), eye blinking rate (BR), and blinking duration (BD), we showed that specific PBs (HR, SKT) vary between trust and distrust states and can effectively predict trust levels in real time. Extending this approach, we compared PB data across competitive and collaborative contexts and employed incremental transfer learning to improve predictive accuracy across different interaction settings.
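To make the idea of classifying trust from physiological signals concrete, here is a deliberately minimal sketch — not the thesis's pipeline — using a nearest-centroid rule over the two features the abstract reports as discriminative (HR and SKT). All sample values are invented for illustration:

```python
# Hedged sketch only: a nearest-centroid classifier over two physiological
# features (heart rate, skin temperature). The thesis's actual classifiers
# and data are not reproduced here; the numbers below are fabricated.

from statistics import mean

def fit_centroids(samples):
    """samples: list of ((hr, skt), label) pairs, label 'trust'/'distrust'.

    Returns one (mean HR, mean SKT) centroid per label."""
    centroids = {}
    for label in {lab for _, lab in samples}:
        feats = [f for f, lab in samples if lab == label]
        centroids[label] = (mean(f[0] for f in feats),
                            mean(f[1] for f in feats))
    return centroids

def predict(centroids, hr, skt):
    """Return the label whose centroid is closest in feature space."""
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - hr) ** 2
                             + (centroids[lab][1] - skt) ** 2)
```

A real system would normalise the features (HR and SKT live on very different scales) and use a learned model; the centroid rule is only the simplest stand-in for "predict trust state from PB features".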
Recognising the potential of less intrusive trust indicators, we also examined vocal and non-vocal cues—such as pitch, speech rate, facial expressions, and blend shapes—as complementary measures of trust. Results indicated that these cues can reliably assess current trust states in real time and predict trust development in subsequent interactions, with trust-related behaviours evolving over repeated HRI sessions. Our analysis demonstrated that these expressive behaviours provide quantifiable, reliable metrics for capturing trust within real-time assessment frameworks.
As the final component of our integrated trust framework, this thesis explored reinforcement learning (RL) for trust optimisation in simulated environments. By integrating our trust model into an RL framework, we demonstrated that dynamically calibrated trust can enhance task performance and reduce the risks of both under- and over-reliance on robotic systems.
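One way to picture how an RL agent could be steered away from both under- and over-reliance is to fold a trust-calibration term into its reward. The sketch below is an assumption-laden illustration of that idea, not the thesis's actual reward design; the penalty form and weight are hypothetical:

```python
# Illustrative only: a reward that combines task success with a penalty for
# trust miscalibration, i.e. when the human's estimated trust diverges from
# the robot's actual reliability (covering both over- and under-trust).
# The linear penalty and its weight are invented for this sketch.

def calibrated_reward(task_reward: float, trust: float,
                      reliability: float,
                      penalty_weight: float = 1.0) -> float:
    """Task reward minus a penalty proportional to |trust - reliability|."""
    miscalibration = abs(trust - reliability)  # 0 when perfectly calibrated
    return task_reward - penalty_weight * miscalibration
```

Under this shaping, an episode where trust matches reliability (`calibrated_reward(1.0, 0.9, 0.9)`) keeps its full task reward, while over-trusting an unreliable robot (`calibrated_reward(1.0, 1.0, 0.5)`) is discounted — which is the qualitative behaviour the abstract attributes to trust-calibrated RL.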
Together, these multifaceted contributions advance a holistic understanding of trust measurement and calibration in HRI, encompassing cultural insights, mathematical modelling, physiological and expressive behaviour analysis, and adaptive control. This integrated approach establishes foundational methodologies for developing trust-aware robots capable of enhancing collaborative outcomes and fostering sustained user trust in real-world applications. The framework presented in this thesis represents a significant advancement in creating robotic systems that can dynamically adapt to human trust states across diverse contexts and interaction scenarios.
Keywords
Human-Robot Interaction, Trust, Modelling, Measurement, Physiological Behaviour, Vocal and non-vocal cues, Machine Learning, Reinforcement Learning, Optimisation