Abstract
Analysis of human sketches in deep learning has advanced immensely through the use of waypoint sequences rather than raster-graphic representations. We aim to go further and model sketches as sequences of low-dimensional parametric curves. To this end, we propose an inverse graphics framework capable of approximating a raster or waypoint-based stroke, encoded as a point-cloud, with a variable-degree Bézier curve. Building on this module, we present Cloud2Curve, a generative model for scalable high-resolution vector sketches that can be trained end-to-end using point-cloud data alone. As a consequence, our model is also capable of deterministic vectorization, mapping novel raster or waypoint-based sketches to their corresponding high-resolution, scalable Bézier equivalents. We evaluate the generation and vectorization capabilities of our model on the Quick, Draw! and K-MNIST datasets.
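As background for the fitting notion used throughout, a classical least-squares fit of a fixed-degree Bézier curve to an ordered point sequence can be sketched as below. This is a minimal illustration with chord-length parameterization, not the paper's learned, variable-degree module; the function names are ours.

```python
import numpy as np
from math import comb

def fit_bezier(points, degree=3):
    """Least-squares fit of a Bezier curve of the given degree to an
    ordered sequence of 2-D points, using chord-length parameterization."""
    points = np.asarray(points, dtype=float)
    # Parameter t_i in [0, 1], proportional to cumulative chord length.
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    t = d / d[-1]
    n = degree
    # Bernstein basis matrix: B[i, k] = C(n, k) * t_i^k * (1 - t_i)^(n - k).
    B = np.stack([comb(n, k) * t**k * (1 - t)**(n - k) for k in range(n + 1)], axis=1)
    # Solve B @ P ~= points for the (degree + 1) control points P.
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
    return ctrl

def eval_bezier(ctrl, ts):
    """Evaluate a Bezier curve with control points `ctrl` at parameters `ts`."""
    n = len(ctrl) - 1
    B = np.stack([comb(n, k) * ts**k * (1 - ts)**(n - k) for k in range(n + 1)], axis=1)
    return B @ ctrl
```

A cubic fit (`degree=3`) yields four control points per stroke, so a stroke is compressed to a fixed low-dimensional parametric representation regardless of how many waypoints it contains.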
Introduction

The analysis of free-hand sketches using deep learning [40] has flourished over the past few years, with sketches now well analysed from classification [43, 42] and retrieval [27, 12, 4] perspectives. Sketches for digital analysis have always been acquired in two primary modalities: raster (pixel grids) and vector (sequences of line segments). Raster sketches have mostly been the modality of choice for sketch recognition and retrieval [43, 27]. However, generative sketch models began to advance rapidly [16] once they focused on vector representations, generating sketches as sequences [7, 37] of waypoints/line segments, similarly to how humans sketch. As a happy byproduct, this paradigm leads to clean, blur-free image generation, as opposed to direct raster-graphic generation [30]. Recent works have studied creativity in sketch generation [16], learning to sketch raster photo input images [36], learning efficient