Estimation of 3D surface shape from line drawings: a Bayesian model.

Human observers can interpret line drawings as well-defined 3D surfaces, despite the absence of local depth cues such as shading and texture. In our previous work (VSS2013, VSS2014), we found that human depth interpretation in the interior of line drawings is probabilistic: the degree of certainty in judgments of relative depth varies in complex ways with both local and global aspects of contour geometry. Here we propose a principled Bayesian framework in which to understand human visual estimates of relative depth in line drawings. We assume a generative model in which 3D shapes are "inflated" from stochastic skeletons with circular cross-sections and then orthographically projected to create line drawings; this model defines a likelihood function for surface normals. The model then estimates the posterior distribution of surface normals at each point on a given line drawing. These posteriors are integrated to yield the probability that any given point on the surface lies closer to the viewer than any other point. This theoretical probability is combined with two human biases in depth perception: (1) a fronto-parallel bias, which leads to an underestimation of slant, and (2) a lower-region bias, whereby lower points in the image tend to be interpreted as closer. We compared the probability derived from this model to subjects' judgments of relative surface depth at two probe points in our experiments, and found good agreement between model and data. In addition, we examined the "receptive field" size of the model to quantify the degree of locality of the cues that influenced subjects' judgments. The model establishes a theoretical framework in which surface depth in line drawings can be understood as probabilistic inference based on contour structure, and in which the relative contributions of local and global aspects of contour geometry can be quantified. Meeting abstract presented at VSS 2015.
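
The abstract does not specify an implementation, but the pipeline it describes (posterior over surface structure, integration into a "point A is closer than point B" probability, combination with a fronto-parallel and a lower-region bias) can be illustrated with a minimal Monte Carlo sketch. The sketch below is not the authors' method; the functions and parameters (sample_depth_map, prob_closer, slant_attenuation, lower_region_weight) are hypothetical placeholders, and the depth-map sampler is a crude stand-in for the inflation-based generative model.

```python
# Minimal sketch (assumed, not the authors' implementation) of the pipeline
# described in the abstract: sample candidate depth maps, apply a
# fronto-parallel bias and a lower-region bias, then estimate the probability
# that one probe point lies closer to the viewer than another.
import numpy as np

rng = np.random.default_rng(0)

def sample_depth_map(shape, n_samples, rng):
    """Hypothetical stand-in for sampling depth maps consistent with the
    contour-based likelihood; here just smoothed random surfaces."""
    h, w = shape
    maps = rng.normal(size=(n_samples, h, w))
    for _ in range(10):  # crude smoothing to mimic inflated surfaces
        maps = 0.25 * (np.roll(maps, 1, axis=1) + np.roll(maps, -1, axis=1)
                       + np.roll(maps, 1, axis=2) + np.roll(maps, -1, axis=2))
    return maps

def prob_closer(depth_samples, a, b, slant_attenuation=0.6, lower_region_weight=0.1):
    """Estimate P(depth at a < depth at b), i.e. point a is seen as closer.
    slant_attenuation < 1 flattens each surface toward fronto-parallel;
    lower_region_weight makes lower image points (larger row index) closer."""
    flattened = slant_attenuation * depth_samples            # fronto-parallel bias
    da = flattened[:, a[0], a[1]] - lower_region_weight * a[0]  # lower-region bias
    db = flattened[:, b[0], b[1]] - lower_region_weight * b[0]
    return float(np.mean(da < db))

samples = sample_depth_map((32, 32), n_samples=500, rng=rng)
print(prob_closer(samples, a=(24, 10), b=(8, 20)))  # probability a is closer than b
```

In this sketch, the Monte Carlo average over sampled surfaces plays the role of integrating the posterior into a relative-depth probability, and the two bias parameters would be fit to behavioral data rather than fixed by hand.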