Deep Reinforcement Learning and Privacy Preserving with Differential Privacy

dc.contributor.advisor: Zhu, Tianqing
dc.contributor.author: Abahussein, Suleiman
dc.date.accessioned: 2024-01-16T06:55:24Z
dc.date.available: 2024-01-16T06:55:24Z
dc.date.issued: 2023-09-15
dc.description.abstract: With the rapid advances in technology and the emergence of multiple technologies that have transformed society, deep reinforcement learning (DRL) offers promising solutions and enhanced capabilities with demonstrated superior results. Deep reinforcement learning is a subfield of artificial intelligence that has attracted significant research attention and development over the past few years. Deep learning enables reinforcement learning (RL) to address problems that were previously intractable, for example, teaching an agent to play video games with only pixels as input. In robotics, deep reinforcement learning algorithms can learn control policies for robots directly from camera inputs in the real world. Deep RL aims to maximize the cumulative reward through trial and error in order to find the optimal policy. Learning proceeds by executing an action, receiving the corresponding reward, and then moving to the next state. Some complex problems require more than one RL agent, which leads to the idea of multi-agent reinforcement learning, where several agents share the same environment and work together to achieve a common goal; an example is multiple robots cooperating to rescue an individual. Nowadays, deep reinforcement learning is used in various areas, such as recommendation systems, robotics, and health applications. While these technologies offer enormous benefits, they also raise significant privacy concerns. Because the learning process involves performing an action, receiving a reward, and moving to the next state, deep reinforcement learning is vulnerable to adversarial attacks, and an adversary can infer private information through recursive querying. A trained policy may also be released to the client side, which could enable an adversary to infer private information from it, posing a real risk and constituting a breach of privacy. This research focuses on deep reinforcement learning and multi-agent reinforcement learning and the related privacy issues. The contributions of this research are as follows:
• This research proposes a solution for online food delivery services that increases the number of food delivery orders and thereby the long-term income of couriers. The solution leverages two multi-agent reinforcement learning algorithms to guide couriers to areas with high demand for food delivery orders.
• This research proposes a solution to protect privacy in Double and Dueling Deep Q Networks by adopting the Differentially Private Stochastic Gradient Descent (DPSGD) method and injecting Gaussian noise into the gradients (illustrated in the sketches following this abstract).
• This research proposes the Protect User Location Method (PULM) to protect customer location information in online food delivery services. The method injects differential privacy Laplace noise scaled by two factors: the size of the city and the frequency of customer orders.
• This research proposes the Protect Trajectory and Location in Food Delivery (PTLFD) method to preserve the privacy of customers' stored data in online food delivery services. The method leverages multi-agent reinforcement learning and differential privacy to protect customer location information.
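To make the two privacy mechanisms summarised above concrete, the sketches below illustrate the general ideas in Python. They are minimal illustrations based only on the abstract, not the implementations described in the thesis; the function names, parameters, and scaling heuristics are assumptions.

```python
# Minimal sketch of a DPSGD-style update (clip per-example gradients, add Gaussian
# noise), shown on a toy linear model. `clip_norm` and `noise_multiplier` are
# assumed illustrative parameters, not values from the thesis.
import numpy as np

def dpsgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One differentially private SGD step on a squared-error objective."""
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped_grads = []
    for xi, yi in zip(X, y):
        g = (xi @ w - yi) * xi                      # per-example gradient of 0.5*(x.w - y)^2
        norm = np.linalg.norm(g)
        clipped_grads.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # bound sensitivity
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)  # Gaussian mechanism
    noisy_grad = (np.sum(clipped_grads, axis=0) + noise) / len(X)
    return w - lr * noisy_grad

# Toy usage: privately fit y ~ 2x.
rng = np.random.default_rng(42)
X = rng.normal(size=(64, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=64)
w = np.zeros(1)
for _ in range(200):
    w = dpsgd_step(w, X, y, rng=rng)
print("learned weight:", w)
```

The second sketch illustrates the PULM idea of Laplace-perturbed customer coordinates, with a noise scale driven by the two factors named in the abstract. The sensitivity heuristic and the kilometres-to-degrees conversion are assumptions for illustration only.

```python
# Assumed two-factor Laplace perturbation of a customer's location; not the
# thesis's actual PULM formula.
import numpy as np

def perturb_location(lat, lon, epsilon=1.0, city_size_km=20.0, order_frequency=5, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    sensitivity = city_size_km * (1.0 + np.log1p(order_frequency))  # assumed two-factor scaling
    scale_deg = (sensitivity / epsilon) / 111.0                     # rough km-to-degrees conversion
    return lat + rng.laplace(0.0, scale_deg), lon + rng.laplace(0.0, scale_deg)

print(perturb_location(24.7136, 46.6753))  # example coordinate pair
```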
dc.format.extent: 146
dc.identifier.uri: https://hdl.handle.net/20.500.14154/71201
dc.language.iso: en
dc.publisher: Saudi Digital Library
dc.subject: Multi-agent reinforcement learning
dc.subject: Deep reinforcement learning
dc.subject: Privacy
dc.subject: Differential privacy
dc.title: Deep Reinforcement Learning and Privacy Preserving with Differential Privacy
dc.type: Thesis
sdl.degree.department: Computer Science
sdl.degree.discipline: Privacy, AI
sdl.degree.grantor: University of Technology Sydney
sdl.degree.name: Doctor of Philosophy
sdl.thesis.source: SACM - Australia
