Baber, Chris; Bubakr, Hebah Abdullah
2023-07-20 (2023)
https://hdl.handle.net/20.500.14154/68672

In large companies, artificial intelligence (AI) is used to optimise workflows and ensure efficiency. A common assumption is that an AI system remains unaffected by bias or prejudice and therefore contributes to fairer outcomes. For example, in recruitment, AI can ensure that each applicant is judged against exactly the criteria in the job description. Our results suggest otherwise; we therefore asked whether the problem of bias extends from the training data (which replicates existing inequalities in organisations) to the design of the AI systems themselves. These learning systems depend on knowledge elicited from human experts. However, if a system is trained to perform and think in the same way as a human, many of the resulting tools would apply unacceptable criteria, because people take into account personal attributes that a machine should not use. The question remains whether the potential impact of bias is considered in the design of an AI system. In this thesis, several experiments are conducted to study unconscious bias in the application of AI, with the aid of two qualitative frameworks and two quantitative questionnaires. We first explore unconscious bias in user interface designs, then examine programmers’ understanding of bias when creating a purposely biased machine using medical databases. A third study addresses the effect of AI recommendations on decision-making, and finally we explore whether user acceptance depends on the type of AI recommendation, testing various suggestions. This project raises awareness of how developers of AI and machine learning may hold a narrow view of ‘bias’ as a statistical problem rather than a social or ethical one.
This limitation is not because they are unaware of these wider concerns, but because the requirements of managing data and implementing algorithms can restrict their focus to technical challenges. Consequently, biased outcomes can be produced unconsciously because developers are simply not attending to these broader concerns. Creating accurate and effective models is important, but so is ensuring that all races, ethnicities, and socioeconomic levels are adequately represented in the data model (O’Neil, 2016).

161; en
Keywords: artificial intelligence; Bias; AI bias; decision making; HCI
Title: How Does Bias Affect Users of Artificial Intelligence Systems?
Type: Thesis