Sketch compression
Date
2023-09
Publisher
University of Surrey
Abstract
In the rapidly evolving field of digital art and animation, sketches are traditionally stored as pixel-based raster images, a representation that discards stroke structure and carries little semantic meaning. This dissertation investigates the efficacy of autoencoders for vector sketch compression, conducting experiments with two distinct neural network architectures: a Long Short-Term Memory (LSTM) autoencoder and a Transformer-based autoencoder. The Transformer, which has significantly advanced sequence-to-sequence tasks, especially in natural language processing, is the focal point of the study, and our experiments ask whether those successes carry over to the domain of vector sketch compression. The answer is a resounding yes: the Transformer model not only reconstructed sketches faithfully but also simplified their strokes and improved overall sketch quality, with its reconstructions achieving 85.03% accuracy on a pre-trained classifier. The LSTM model, known for its ability to capture temporal dependencies, served as the baseline, achieving 56.139% classification accuracy on the same classifier. These findings strongly advocate for the adoption of Transformer-based models in vector sketch compression, offering a more compact and semantically richer representation, while the LSTM's respectable performance suggests its potential utility in less complex scenarios. Overall, this study opens new avenues for research in digital art, particularly in optimizing Transformer architectures for sketch compression.
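To make the architecture concrete, the following is a minimal sketch of a Transformer autoencoder for vector sketches, not the dissertation's actual code. It assumes sketches are encoded in the stroke-5 format used by the QuickDraw/Sketch-RNN line of work, where each point is (dx, dy, pen_down, pen_up, end_of_sketch); the class name, layer sizes, and pooling strategy are illustrative assumptions.

```python
# Hypothetical Transformer autoencoder for stroke-5 vector sketches.
# All hyperparameters are placeholders, not the dissertation's settings.
import torch
import torch.nn as nn

class SketchTransformerAE(nn.Module):
    def __init__(self, d_model=128, nhead=4, num_layers=3, max_len=250):
        super().__init__()
        self.embed = nn.Linear(5, d_model)  # stroke-5 point -> embedding
        # Learned positional embeddings for encoder inputs and decoder queries.
        self.pos = nn.Parameter(torch.randn(1, max_len, d_model) * 0.02)
        self.queries = nn.Parameter(torch.randn(1, max_len, d_model) * 0.02)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.out = nn.Linear(d_model, 5)  # reconstruct stroke-5 points

    def forward(self, x):
        # x: (batch, seq_len, 5) stroke-5 sequence
        b, t, _ = x.shape
        h = self.embed(x) + self.pos[:, :t]
        memory = self.encoder(h)
        # Compress: mean-pool the encoder output into one latent code per sketch.
        z = memory.mean(dim=1, keepdim=True)          # (batch, 1, d_model)
        # Decompress: learned query positions attend to the latent code alone,
        # so the reconstruction flows entirely through the bottleneck.
        tgt = self.queries[:, :t].expand(b, -1, -1)
        recon = self.decoder(tgt, z)                  # (batch, seq_len, d_model)
        return self.out(recon), z.squeeze(1)
```

An LSTM baseline in the same setup would replace the encoder and decoder stacks with nn.LSTM modules and use the final hidden state as the latent code; training would typically combine a regression loss on (dx, dy) with a classification loss on the pen states.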
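The abstract's accuracy figures come from scoring reconstructions with a pre-trained sketch classifier. Below is a hedged sketch of that evaluation protocol; `autoencoder`, `pretrained_classifier`, and `test_loader` are hypothetical names introduced for illustration.

```python
# Measure reconstruction quality as a pre-trained classifier's accuracy
# on sketches reconstructed through the autoencoder's bottleneck.
import torch

@torch.no_grad()
def classifier_accuracy(autoencoder, pretrained_classifier, test_loader):
    correct, total = 0, 0
    autoencoder.eval()
    pretrained_classifier.eval()
    for sketches, labels in test_loader:       # (batch, seq_len, 5), (batch,)
        recon, _ = autoencoder(sketches)       # reconstruct through the bottleneck
        logits = pretrained_classifier(recon)  # classify the reconstruction
        correct += (logits.argmax(dim=-1) == labels).sum().item()
        total += labels.numel()
    return correct / total  # e.g. 0.8503 for the Transformer, per the abstract
```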
Keywords
Autoencoder, Transformer, Deep learning, Sketch, LSTM, Compression, Neural network