Explainability Requirement in Blockchain Smart Contracts: A Human-Centred Approach

dc.contributor.advisor: Bahsoon, Rami
dc.contributor.author: Alghanmi, Hanouf
dc.date.accessioned: 2024-11-14T12:23:34Z
dc.date.issued: 2024-07
dc.description.abstract: Blockchain smart contracts have emerged as a transformative technology, enabling the automation and execution of contractual agreements. These self-executing software programs leverage the distributed and immutable nature of blockchain to eliminate the need for third-party intermediaries. However, this new paradigm of automation and authority introduces a complex environment with technical intricacies that users are expected to understand and trust. The irreversible nature of blockchain decisions exacerbates these issues, as mistakes or misuse cannot be rectified. Current smart contract designs often neglect human-centred approaches and the exploration of trustworthiness characteristics such as explainability. Explainability, a well-established requirement in Explainable Artificial Intelligence (XAI) aimed at enhancing human understandability, transparency and trust, has yet to be thoroughly examined in the context of smart contracts. A noticeable gap exists in the literature concerning the early-stage development of explainability requirements, including a lack of established methods and frameworks for the requirements analysis phase, design principles, and the evaluation of their necessity and trade-offs. This thesis therefore aims to advance the field of blockchain smart contract systems by introducing explainability as a design concern, prompting requirements engineers and designers to address it during the early development phases. Specifically, we provide guidelines for explainability requirements analysis that address what, why, when and to whom to explain. We propose design principles for integrating explainability into the early stages of development. To tailor explainability further, we propose a human-centred framework for determining the information requirements of smart contract explanations, utilising situational awareness theories to address the 'what to explain' aspect. Additionally, we present 'explainability purposes' as an integral resource for evaluating and designing explainability. Our approach includes a novel evaluation framework, inspired by the metacognitive explanation-based theory of surprise, that addresses the 'why to explain' aspect. The proposed approaches have been evaluated through qualitative validation and expert feedback. We illustrate the added value and constraints of explainability requirements in smart contracts through case studies drawn from the literature, industry scenarios and real-world projects. This study informs requirements engineers and designers on how to elicit, design and evaluate the need for explainability requirements, contributing to the advancement of the early development of smart contracts.
dc.format.extent: 338
dc.identifier.uri: https://hdl.handle.net/20.500.14154/73585
dc.language.iso: en
dc.publisher: The University of Birmingham
dc.subject: Software engineering
dc.subject: Blockchain
dc.subject: Smart contracts
dc.subject: Explainability
dc.subject: Requirements
dc.subject: Human-centred
dc.title: Explainability Requirement in Blockchain Smart Contracts: A Human-Centred Approach
dc.type: Thesis
sdl.degree.department: College of Engineering and Physical Sciences
sdl.degree.discipline: Software Engineering
sdl.degree.grantor: The University of Birmingham
sdl.degree.name: Doctor of Philosophy

Files

Original bundle

Name: SACM-Dissertation.pdf
Size: 4.55 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.61 KB
Format: Item-specific license agreed to upon submission

Copyright owned by the Saudi Digital Library (SDL) © 2025