Title: Explainability Requirement in Blockchain Smart Contracts: A Human-Centred Approach
Type: Thesis
Contributors: Bahsoon, Rami; Alghanmi, Hanouf
Dates: 2024-11-14; 2024-07
URI: https://hdl.handle.net/20.500.14154/73585
Pages: 338
Language: en
Keywords: Software engineering; Blockchain; smart contracts; explainability; requirements; human-centred

Abstract:
Blockchain smart contracts have emerged as a transformative technology, enabling the automation and execution of contractual agreements. These self-executing software programs leverage blockchain's distributed and immutable nature to eliminate the need for third-party intermediaries. However, this new paradigm of automation and authority introduces a complex environment with technical intricacies that users are expected to understand and trust. The irreversible nature of blockchain decisions exacerbates these issues, as any mistake or misuse cannot be rectified. Current smart contract designs often neglect human-centric approaches and the exploration of trustworthiness characteristics, such as explainability. Explainability, a well-established requirement in Explainable Artificial Intelligence (XAI) aimed at enhancing human understandability, transparency and trust, has yet to be thoroughly examined in the context of smart contracts. A noticeable gap exists in the literature concerning the early-stage development of explainability requirements, including established methods and frameworks for requirements analysis, design principles, and the evaluation of their necessity and trade-offs. Therefore, this thesis aims to advance the field of blockchain smart contract systems by introducing explainability as a design concern, prompting requirements engineers and designers to cater to this concern during the early development phases. Specifically, we provide guidelines for explainability requirements analysis, addressing what, why, when and to whom to explain. We propose design principles for integrating explainability into the early stages of development. To tailor explainability further, we propose a human-centred framework for determining information requirements in smart contract explanations, utilising situational awareness theories to address the 'what to explain' aspect. Additionally, we present 'explainability purposes' as an integral resource in evaluating and designing explainability. Our approach includes a novel evaluation framework, inspired by the metacognitive explanation-based theory of surprise, that addresses the 'why to explain' aspect. The proposed approaches have been evaluated through qualitative validations and expert feedback. We have illustrated the added value and constraints of explainability requirements in smart contracts by presenting case studies drawn from the literature, industry scenarios and real-world projects. This study informs requirements engineers and designers on how to elicit, design and evaluate the need for explainability requirements, contributing to the advancement of the early development of smart contracts.