Mastering Sketching

Edgar Simo-Serra*, Satoshi Iizuka*, Hiroshi Ishikawa   (*equal contribution)


© Eisaku Kubonouchi (@EISAKUSAKU)

We present an integral framework for training sketch simplification networks that convert challenging rough sketches into clean line drawings. Our approach augments a simplification network with a discriminator network, training both networks jointly: the discriminator network learns to discern whether a line drawing is real training data or the output of the simplification network, which in turn tries to fool it. This approach has two major advantages. First, because the discriminator network learns the structure of line drawings, it encourages the output sketches of the simplification network to be more similar in appearance to the training sketches. Second, we can also train the networks with additional unsupervised data: by adding rough sketches and line drawings that do not correspond to each other, we can improve the quality of the sketch simplification. Owing to differences in architecture, our approach has advantages over similar adversarial training approaches in stability of training and in the aforementioned ability to exploit unsupervised training data. We show how our framework can be used to train models that significantly outperform the state of the art in sketch simplification, despite using the same architecture for inference. We additionally present an approach that optimizes the model for a single image, improving accuracy at the cost of additional computation time. Finally, we show that, using the same framework, it is possible to train the network to perform the inverse problem, i.e., convert simple line sketches into pencil drawings, which is not possible using the standard mean squared error loss.

Approach

Our training approach

Our approach uses convolutional neural networks to convert rough sketches into clean simplified line drawings. We build upon our previous sketch simplification work and augment it by jointly training with supervised and unsupervised data, employing an auxiliary discriminator network. The discriminator network is trained to distinguish between real sketches and those generated by our sketch simplification network, while the sketch simplification network is trained to fool the discriminator network.

We train the model with data from three sources: fully supervised rough sketch-line drawing pairs, unsupervised rough sketches, and unsupervised line drawings. For the fully supervised data we employ both a supervised loss and the discriminator, while for the unsupervised rough sketches we employ only the discriminator to train the sketch simplification model. The discriminator is trained using all the line drawings as positives and the output of the sketch simplification model on all the rough sketches as negatives.
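The composition of losses described above can be sketched as follows. This is a minimal illustration in plain NumPy, not the paper's implementation: the placeholder callables `S` (simplification network) and `D` (discriminator), the weighting constant `alpha`, and the `1e-8` log stabilizer are all assumptions standing in for the actual networks and hyperparameters.

```python
import numpy as np

def simplification_loss(S, D, rough_sup, clean_sup, rough_unsup, alpha=1.0):
    """Loss for the simplification network S (toy sketch).

    Supervised rough/clean pairs contribute a supervised (MSE) term plus
    an adversarial term; unsupervised rough sketches contribute only the
    adversarial term, which pushes S's outputs toward what D considers
    a real line drawing.
    """
    mse = np.mean((S(rough_sup) - clean_sup) ** 2)
    adv = -np.mean(np.log(D(S(rough_sup)) + 1e-8))
    adv += -np.mean(np.log(D(S(rough_unsup)) + 1e-8))
    return mse + alpha * adv

def discriminator_loss(S, D, clean_all, rough_all):
    """Loss for D: all line drawings (paired or not) are positives, and
    S's outputs on all rough sketches (paired or not) are negatives."""
    real = -np.mean(np.log(D(clean_all) + 1e-8))
    fake = -np.mean(np.log(1.0 - D(S(rough_all)) + 1e-8))
    return real + fake
```

In training, the two losses would be minimized in alternation, with gradients of `discriminator_loss` updating only D and gradients of `simplification_loss` updating only S.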

Results

Sketch Simplification Results

We evaluate our approach on challenging real scanned rough sketches as shown above. The first and third column sketches are copyrighted by Eisaku Kubonouchi (@EISAKUSAKU) and only non-commercial research usage is allowed, while the image in the second column is copyrighted by David Revoy (www.davidrevoy.com) under CC-by 4.0.

Pencil Drawing Generation

Pencil Drawing Generation Results

Our approach is not limited to sketch simplification and can be used for a variety of complicated tasks. We illustrate this by training our model to generate pencil drawings from clean sketch images. Examples from two different models, each trained on data from a different artist, are shown above.

Single Image Optimization

Single Image Optimization Results

As our approach allows the use of unsupervised training data, it is possible to train models not only in an inductive fashion, i.e., training them to produce good sketch simplifications for unseen data, but also in a transductive fashion, i.e., optimizing the model to produce good results on the test data itself. We note that this does not require ground truth annotations. In the results shown above, the initial outputs of the model are poor, but after optimizing it to produce more realistic line drawings, the results improve significantly. Images copyrighted by David Revoy (www.davidrevoy.com) under CC-by 4.0.
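The transductive idea can be illustrated as follows: with no ground truth for the test sketch, the model is tuned so its output looks "real" to a fixed discriminator. This is a hedged toy sketch, not the paper's procedure: the one-parameter model `w`, the fixed discriminator `D`, and the finite-difference gradient all stand in for the full network and backpropagation.

```python
import numpy as np

def optimize_single_image(rough, D, w=1.0, lr=0.1, steps=50, eps=1e-4):
    """Tune the toy model parameter w on a single test image.

    Only the adversarial term is used, so no ground truth is needed;
    the gradient is approximated by central finite differences.
    """
    def adv_loss(w):
        out = np.clip(w * rough, 0.0, 1.0)   # toy "simplification" of the image
        return -np.log(D(out) + 1e-8)        # loss is low when D says "real"

    for _ in range(steps):
        grad = (adv_loss(w + eps) - adv_loss(w - eps)) / (2 * eps)
        w -= lr * grad
    return w
```

In the actual method, the same principle applies to the full network's weights: the simplification network is fine-tuned on the test image using only the discriminator's feedback.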

For more details and results, please consult the full paper.

This work was partially supported by JST CREST Grant Number JPMJCR14D1, and JST ACT-I Grant Numbers JPMJPR16UD and JPMJPR16U3.

Publications

2018

  • Mastering Sketching: Adversarial Augmentation for Structured Prediction
    • Edgar Simo-Serra*, Satoshi Iizuka*, Hiroshi Ishikawa (* equal contribution)
    • ACM Transactions on Graphics (Presented at SIGGRAPH), 2018

Source Code

  • Sketch Simplification Network
    • Sketch Simplification Network, 1.0 (Dec, 2017)
    • Sketch Simplification Convolutional Neural Network
    • Edgar Simo-Serra and Satoshi Iizuka