Contributors: Song, Yi-Zhe; Ashcroft, Alexander; Alsadoun, Hadeel Mohammed
Date accessioned: 2024-07-02
Date available: 2024-07-02
Date issued: 2023-09
URI: https://hdl.handle.net/20.500.14154/72458

Abstract: In the rapidly evolving field of digital art and animation, traditional sketching techniques often rely on pixel-based methods, which yield less semantically meaningful representations. This dissertation challenges that paradigm by rigorously investigating the efficacy of autoencoders for vector sketch compression. We conducted experiments with two distinct neural network architectures: Long Short-Term Memory (LSTM) and Transformer-based autoencoders. The Transformer model, which has had a significant impact on sequence-to-sequence tasks, especially in natural language processing, serves as the focal point of our study. Our experiments address a compelling question: can these impressive results be replicated in the domain of vector sketch compression? The answer is a resounding yes. The Transformer model not only excelled at reconstructing sketches but also simplified the strokes and enhanced the overall quality of the sketches, achieving 85.03% classification accuracy. The LSTM model, known for its ability to capture temporal dependencies, served as our baseline, achieving a classification accuracy of 56.139% on a pre-trained classifier. Our findings strongly advocate for the adoption of Transformer-based models in vector sketch compression, as they offer a more compact and semantically rich representation. The LSTM model's respectable performance also suggests its potential utility in less complex scenarios. Overall, this study opens new avenues for research in digital art, particularly in optimizing Transformer architectures for sketch compression.

Pages: 71
Language: en
Keywords: Autoencoder; Transformer; Deep learning; Sketch; LSTM; Compression; Neural network; Sketch compression
Type: Thesis
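To make the abstract's setup concrete, the following is a minimal PyTorch sketch of a Transformer autoencoder for vector sketches, assuming QuickDraw-style stroke-3 input where each point is (dx, dy, pen-state). The class name, layer sizes, mean-pooled latent bottleneck, and MSE reconstruction loss are all illustrative assumptions, not the architecture or training objective used in the thesis.

```python
# Illustrative sketch only: a Transformer autoencoder that compresses a
# variable-length stroke sequence into a fixed-size latent code and
# reconstructs the sequence from it. All hyperparameters are assumptions.

import torch
import torch.nn as nn


class SketchTransformerAutoencoder(nn.Module):
    def __init__(self, point_dim=3, d_model=128, nhead=4,
                 num_layers=3, latent_dim=64, max_len=250):
        super().__init__()
        self.input_proj = nn.Linear(point_dim, d_model)
        # Learned positional embeddings, shared by encoder and decoder queries.
        self.pos_emb = nn.Parameter(torch.zeros(1, max_len, d_model))

        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)

        # The bottleneck: pool the encoding and project to a compact latent.
        self.to_latent = nn.Linear(d_model, latent_dim)
        self.from_latent = nn.Linear(latent_dim, d_model)

        dec_layer = nn.TransformerDecoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.output_proj = nn.Linear(d_model, point_dim)

    def encode(self, points):
        # points: (batch, seq_len, 3) in stroke-3 format (dx, dy, pen-state)
        h = self.input_proj(points) + self.pos_emb[:, :points.size(1)]
        h = self.encoder(h)
        return self.to_latent(h.mean(dim=1))        # (batch, latent_dim)

    def decode(self, z, seq_len):
        # Expose the latent code as a one-token "memory"; positional
        # embeddings act as the decoder's queries for each output point.
        memory = self.from_latent(z).unsqueeze(1)    # (batch, 1, d_model)
        queries = self.pos_emb[:, :seq_len].expand(z.size(0), -1, -1)
        h = self.decoder(queries, memory)
        return self.output_proj(h)                   # (batch, seq_len, 3)

    def forward(self, points):
        z = self.encode(points)
        return self.decode(z, points.size(1))


if __name__ == "__main__":
    model = SketchTransformerAutoencoder()
    batch = torch.randn(8, 250, 3)   # random stand-in for stroke-3 sketches
    recon = model(batch)
    # Simplification: MSE over all three channels; a real sketch model would
    # typically treat the pen-state as a categorical target instead.
    loss = nn.functional.mse_loss(recon, batch)
    print(recon.shape, loss.item())
```

In this reading, the latent vector z is the "compressed" representation the abstract refers to, and an evaluation along the lines described there would feed the reconstructed sketches to a pre-trained classifier and report classification accuracy as a proxy for how much semantic content survives compression.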