DEEP LEARNING APPROACH TO LARGE-SCALE SYSTEMS
Abstract
The significance of large-scale systems has increased recently due to the growth in data and the number of users. The computational cost of analyzing these high-dimensional systems, driven by the curse of dimensionality, motivates the development of efficient approaches. Deep learning methods have the capability and scalability to process high-volume data with significantly lower computational complexity. In this work, deep learning algorithms are utilized to solve large-scale systems in different applications. We design and solve high-dimensional systems using tractable algorithms. In particular, deep reinforcement learning and deep neural networks are employed for maximization problems and classification problems, respectively. Comparisons with conventional algorithms are performed for validation purposes. Moreover, this work proposes an approach, inspired by deep learning algorithms, to exploiting knowledge of the physical structure of plants.
An application in the forest management field considered in this work is a large-scale forest model for wildfire mitigation. A high-dimensional forest model is designed in the Markov decision process framework. The model includes the probability of wildfire occurrence in a large number of stands, where the probability of wildfire in each stand is a function of wind direction, flammability, and the stand's timber volume. Wildfire reduction is achieved by maximizing the timber volume in the forest through management actions. A deep reinforcement learning approach, namely the actor-critic algorithm, is used to solve the Markov decision process and propose management policies. Furthermore, the proposed approach is compared to conventional Markov decision process solutions, namely the value iteration algorithm and the genetic algorithm, and it outperforms both in terms of the resulting timber volume and the computational cost.
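As a rough illustration of the actor-critic loop described above (not the thesis implementation), the Python sketch below trains small policy and value networks on a toy stand-level state. The stand count, action set, wildfire probability, dynamics, and reward are placeholder assumptions.

```python
# Minimal actor-critic sketch for a toy forest MDP (illustrative assumptions only:
# the state, transition, and reward below are placeholders, not the thesis model).
import torch
import torch.nn as nn

N_STANDS, N_ACTIONS = 8, 3          # assumed sizes: 8 stands, 3 management actions
GAMMA = 0.99

actor = nn.Sequential(nn.Linear(N_STANDS, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
critic = nn.Sequential(nn.Linear(N_STANDS, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

def step(state, action):
    """Placeholder dynamics: the managed stand gains volume, a random wildfire resets one stand."""
    next_state = state.clone()
    next_state[action] += 0.1                          # assumed growth from management
    if torch.rand(1).item() < 0.05:                    # assumed wildfire probability
        next_state[torch.randint(N_STANDS, (1,))] = 0.0
    reward = next_state.sum().item()                   # reward = total timber volume
    return next_state, reward

state = torch.rand(N_STANDS)
for t in range(2000):
    dist = torch.distributions.Categorical(logits=actor(state))
    action = dist.sample()
    next_state, reward = step(state, action.item())

    # TD(0) advantage: r + gamma * V(s') - V(s)
    td_target = reward + GAMMA * critic(next_state).detach()
    advantage = td_target - critic(state)

    # Policy gradient term for the actor plus squared TD error for the critic.
    loss = (-dist.log_prob(action) * advantage.detach() + advantage.pow(2)).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    state = next_state.detach()
```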
Another application considered in this thesis is fast stochastic model predictive control. In the proposed approach, the computational complexity of solving the stochastic predictive control problem is significantly reduced using deep learning. In particular, the number of constraints in the sampled method is reduced to the minimal set required to solve the optimization problem. Determining these constraints, i.e., the policies, is posed as a classification problem solved with a neural network. The small number of constraints and the solvable quadratic optimization problem introduced by the sampled method result in a fast stochastic model predictive controller.
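The constraint-selection step can be viewed as multi-label classification. The sketch below illustrates this idea on synthetic data; the state dimension, number of sampled constraints, network architecture, and labels are assumptions, and in practice the labels would come from offline solutions of the sampled optimization problem rather than random data.

```python
# Sketch of constraint selection as multi-label classification (illustrative only).
import torch
import torch.nn as nn

STATE_DIM, N_SAMPLED_CONSTRAINTS = 4, 200      # assumed problem sizes

# Synthetic training set: states X and, for each, a 0/1 mask of relevant constraints.
X = torch.randn(1000, STATE_DIM)
Y = (torch.randn(1000, N_SAMPLED_CONSTRAINTS) > 1.5).float()   # placeholder labels

classifier = nn.Sequential(
    nn.Linear(STATE_DIM, 128), nn.ReLU(),
    nn.Linear(128, N_SAMPLED_CONSTRAINTS),     # one logit per sampled constraint
)
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(classifier(X), Y)
    loss.backward()
    opt.step()

# Online: keep only the constraints predicted relevant and pass them to the QP solver.
x0 = torch.randn(STATE_DIM)
active = torch.sigmoid(classifier(x0)) > 0.5
reduced_idx = active.nonzero().squeeze(-1)
print("reduced constraint set size:", reduced_idx.numel())
```

The online optimization then involves only the reduced constraint set, which is what makes the resulting controller fast.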
In this thesis, we also propose an approach to exploiting prior knowledge of physically interconnected systems in the parameter estimation domain. Unlike the physics-informed neural network, the proposed approach can estimate the parameters of every system in the interconnection. It has a general form that can be applied to any system and to any objective function. We also address both the case in which the system functions are known and the case in which they are not; when knowledge of the system functions is unavailable, the Fourier series approximation method is used. The first-order gradient descent algorithm is used to minimize the estimation error in the objective function, and we provide a systematic way to compute the gradients of the objective function. Using several versions of the gradient descent algorithm, the proposed solution shows promising results in estimating the system parameters.
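A minimal sketch of the first-order gradient-descent estimation idea is given below, assuming a toy pair of coupled scalar systems and a squared-error objective with analytically computed gradients; the interconnection structure, objective, and update details in the thesis differ.

```python
# First-order gradient-descent parameter estimation sketch (illustrative assumptions:
# a toy pair of coupled scalar systems and a squared-error objective, not the thesis model).
import numpy as np

# "True" coupled dynamics: x1' = a*x1 + c*x2, x2' = b*x2 + c*x1 (parameters unknown).
a_true, b_true, c_true = -0.5, -0.3, 0.2
T, dt = 500, 0.01
x = np.zeros((T, 2))
x[0] = [1.0, -1.0]
for k in range(T - 1):
    x1, x2 = x[k]
    x[k + 1] = x[k] + dt * np.array([a_true * x1 + c_true * x2,
                                     b_true * x2 + c_true * x1])
dx = np.diff(x, axis=0) / dt                      # measured derivatives (noise-free here)

theta = np.zeros(3)                               # estimates of (a, b, c)
lr = 0.5
for it in range(2000):
    a, b, c = theta
    pred = np.column_stack([a * x[:-1, 0] + c * x[:-1, 1],
                            b * x[:-1, 1] + c * x[:-1, 0]])
    err = pred - dx                               # residual of the estimation objective
    # Analytic gradients of J = 0.5*(mean(err1**2) + mean(err2**2)) w.r.t. (a, b, c).
    grad = np.array([np.mean(err[:, 0] * x[:-1, 0]),
                     np.mean(err[:, 1] * x[:-1, 1]),
                     np.mean(err[:, 0] * x[:-1, 1] + err[:, 1] * x[:-1, 0])])
    theta -= lr * grad

print("estimated (a, b, c):", theta)              # should approach (-0.5, -0.3, 0.2)
```

When the system functions are unknown, the linear terms above would be replaced by a truncated Fourier series whose coefficients are estimated by the same gradient-descent update.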
Keywords
Artificial Intelligence, Deep Learning, Neural Network, Classification, Stochastic Model Predictive Control, System Identification, Parameter Estimation, Wildfire Mitigation