Sketch Simplification

Edgar Simo-Serra*, Satoshi Iizuka*, Kazuma Sasaki, Hiroshi Ishikawa   (*equal contribution)


We present a novel technique to simplify sketch drawings based on learning a series of convolution operators. In contrast to existing approaches that require vector images as input, we allow the more general and challenging input of rough raster sketches, such as those obtained by scanning pencil drawings. We convert the rough sketch into a simplified version that is then amenable to vectorization. This is done fully automatically, without user intervention. Our model consists of a fully convolutional neural network which, unlike most existing convolutional neural networks, can process images of any dimensions and aspect ratio as input, and outputs a simplified sketch with the same dimensions as the input image. To teach our model to simplify, we present a new dataset of pairs of rough and simplified sketch drawings. By leveraging convolution operators in combination with efficient use of our proposed dataset, we are able to train our sketch simplification model. Our approach naturally overcomes the limitations of existing methods, e.g., requiring vector images as input and long computation times, and we show that meaningful simplifications can be obtained for many different test cases. Finally, we validate our results with a user study in which we greatly outperform similar approaches and establish the state of the art in sketch simplification of raster images.


Sketch Simplification Model

Our model is based on a fully convolutional neural network: it takes a rough sketch image as input and outputs a clean, simplified sketch. This is done by processing the image with convolutional layers, which can be seen as banks of filters run over the input. While the input is a grayscale image, our model internally uses a much larger representation. We build the model from three types of convolutions: down-convolutions, which halve the resolution by using a stride of two; flat-convolutions, which process the image without changing the resolution; and up-convolutions, which double the resolution by using a stride of one half. This allows our model to initially compress the image into a smaller representation, process that small image, and finally expand it into the simplified clean output image, which can easily be vectorized.
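To make the three convolution types concrete, the following is a minimal numpy sketch, not the authors' implementation: a single-channel, single-filter version of each operation. A stride-2 convolution halves the spatial resolution, a stride-1 convolution keeps it, and a stride-1/2 ("up") convolution, here realized by zero-upsampling followed by an ordinary convolution, doubles it. The kernel, padding, and image size are illustrative choices.

```python
import numpy as np

def conv2d(img, kernel, stride=1, pad=1):
    """Plain 2D convolution (cross-correlation) with zero padding."""
    img = np.pad(img, pad)
    k = kernel.shape[0]
    h = (img.shape[0] - k) // stride + 1
    w = (img.shape[1] - k) // stride + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = img[i*stride:i*stride+k, j*stride:j*stride+k]
            out[i, j] = np.sum(patch * kernel)
    return out

def up_conv2d(img, kernel):
    """Stride-1/2 'up-convolution': insert zeros between pixels, then convolve."""
    h, w = img.shape
    up = np.zeros((2*h, 2*w))
    up[::2, ::2] = img  # zero-upsampling by a factor of two
    return conv2d(up, kernel, stride=1, pad=1)

kernel = np.ones((3, 3)) / 9.0       # a simple averaging filter for illustration
x = np.random.rand(32, 48)           # any input size and aspect ratio works

down = conv2d(x, kernel, stride=2)   # down-convolution: 32x48 -> 16x24
flat = conv2d(down, kernel, stride=1)  # flat-convolution: 16x24 -> 16x24
up = up_conv2d(flat, kernel)         # up-convolution: 16x24 -> 32x48
```

Chaining these three stages reproduces the compress-process-expand shape of the model: the output has exactly the same dimensions as the input, which is what allows the network to handle images of arbitrary size.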


Sketch Simplification Results

We evaluate extensively on complicated real scanned sketches and show that our approach significantly outperforms the state of the art. We corroborate these results with a user test in which our model significantly outperforms vectorization approaches. Images (a), (b), and (d) are part of our test set, while images (c) and (e) were taken from Flickr. Image (c) courtesy of Anna Anjos and image (e) courtesy of Yama Q, used under Creative Commons licensing.


Comparison with commercial tools

We perform a user study comparing against vectorization tools that work directly on raster images. In particular, we consider the open-source Potrace and the commercial Adobe Live Trace. Users prefer our approach over 97% of the time with respect to either of the two tools.

For more details and results, please consult the full paper.

This research was partially funded by JST CREST.



  • Learning to Simplify: Fully Convolutional Networks for Rough Sketch Cleanup
    • Edgar Simo-Serra*, Satoshi Iizuka*, Kazuma Sasaki, Hiroshi Ishikawa (* equal contribution)
    • ACM Transactions on Graphics (SIGGRAPH), 2016