How explanation affects user’s trust for rare disease diagnosis AI-related systems
dc.contributor.advisor | Syuan Liu | |
dc.contributor.author | AHMED HAMAD ALBELADI | |
dc.date | 2021 | |
dc.date.accessioned | 2022-05-28T17:16:45Z | |
dc.date.available | 2022-05-28T17:16:45Z | |
dc.degree.department | Cyber Security | |
dc.degree.grantor | Swansea University | |
dc.description.abstract | This quantitative research studied how explanation affects users' trust in AI-based systems for rare disease diagnosis. Previous studies on the impact of artificial intelligence on diagnosis established that AI-based decision support systems in healthcare practice assist decision making in different ways, such as handling patient data, interpreting disease diagnoses, predicting patients' health and medical conditions, and drawing conclusions from clinical findings. This study contributes to the expanding knowledge on the importance of AI technology in healthcare. While artificial intelligence systems have often been used as decision support systems to predict the progression of diseases and patients' responses to treatment, the quality of explanations in AI has been identified as a key impediment to users' trust in, and the transparency of, medical diagnosis systems; hence the rationale for this study. The study tested five hypotheses, focusing on narrow, inflexible, insensitive, unresponsive, and extensible explanations and their impact on users' trust in rare disease diagnosis. Quantitative data were collected from a sample of 100 healthcare workers through a survey using an online questionnaire. The data were analysed with the Statistical Package for the Social Sciences (SPSS): descriptive statistics and correlation analysis were produced, and regression analysis was used to model the relationship between trust (the dependent variable) and the explanation variables (the independent variables). According to the results, inflexible (r = -0.796), unresponsive (r = -0.760), and insensitive (r = -0.788) explanations were negatively correlated with trust, while narrow (r = 0.895) and extensible (r = 0.894) explanations were positively correlated with trust. Based on these results, narrow and extensible explanations provide users with new and valuable information about rare diseases, and users' trust in the system increases considerably as a result. In conclusion, users who realise that they have learnt something or benefited from the knowledge base are more comfortable with the system. | |
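The abstract reports the analysis as run in SPSS; as a minimal sketch only, the same Pearson correlations and trust regression could be reproduced in Python as below. The file name and column names (trust plus the five explanation scales) are hypothetical placeholders, not taken from the thesis.

```python
# Illustrative sketch of the reported analysis (the thesis used SPSS, not Python).
# Assumed input: one row per respondent (n = 100), one column per survey scale.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")  # hypothetical data file
predictors = ["narrow", "inflexible", "insensitive", "unresponsive", "extensible"]

# Pearson correlation of each explanation type with trust
# (the study reports e.g. r = 0.895 for narrow, r = -0.796 for inflexible).
print(df[predictors + ["trust"]].corr()["trust"])

# Multiple regression: trust (dependent) on the five explanation variables.
X = sm.add_constant(df[predictors])
model = sm.OLS(df["trust"], X).fit()
print(model.summary())
```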
dc.identifier.uri | https://drepo.sdl.edu.sa/handle/20.500.14154/37461 | |
dc.language.iso | en | |
dc.title | How explanation affects user’s trust for rare disease diagnosis AI-related systems | |
sdl.thesis.level | Master | |
sdl.thesis.source | SACM - United Kingdom |