Shape from shading under inconsistent lighting.

Shape-from-shading models traditionally assume that observers estimate a lighting direction and use this estimate to infer shape from shading. In real-world scenes, however, local lighting direction varies in unpredictable ways. How locally consistent must lighting be for observers to perceive shape from shading? We manipulated local lighting direction across a scene and measured how this affected the perception of shape from shading.

In Experiment 1, subjects viewed surfaces that varied in depth and judged the relative depth of the surface at two nearby probe locations. The depth profiles of the surfaces were created by filtering Gaussian white noise with a kernel of one of three widths (sigmaS). The lighting direction varied smoothly from place to place: lighting directions were generated from Gaussian noise filtered with one of six kernel widths (sigmaL), plus one uniform-lighting-direction condition. Performance decreased smoothly as sigmaL decreased (larger sigmaL corresponds to smoother, less variable lighting), but even with quite rapid changes in local lighting direction, performance remained well above chance.

In Experiment 2, a window of uniform lighting direction was placed around the probe locations; outside this window the local lighting direction varied rapidly. Window size varied from trial to trial. The results show that if the local lighting direction is consistent over more than two bumps of the surface shape, observers can recover shape from shading.

In Experiment 3, subjects viewed a surface in which three quadrants were lit from one direction and the lighting direction of the fourth differed by a tilt of 90°; between quadrants, the lighting direction changed smoothly from one direction to the other. The task was to identify the odd quadrant. All subjects performed at chance.

These results suggest that shape-from-shading mechanisms can tolerate rapid variations in local lighting direction, and that observers cannot even detect strong lighting inconsistencies. Meeting abstract presented at VSS 2015.
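The smoothly varying depth and lighting fields described above (white noise filtered with a Gaussian kernel of width sigmaS or sigmaL) can be sketched as follows; this is a minimal illustration, not the authors' stimulus code, and the array sizes, sigma values, and normalization are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_noise_field(shape, sigma, seed=None):
    """Filter Gaussian white noise with a Gaussian kernel of width sigma.

    Larger sigma -> smoother field, i.e. slower spatial variation.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    field = gaussian_filter(noise, sigma=sigma)
    # Normalize to unit variance so fields with different sigmas are comparable.
    return field / field.std()

# Depth profile: a smooth height map; sigmaS controls the bump size.
depth = smooth_noise_field((256, 256), sigma=8.0, seed=0)

# Lighting-tilt map (degrees): a smoothly varying local lighting direction;
# sigmaL controls how locally consistent the lighting is.
tilt = 90.0 + 30.0 * smooth_noise_field((256, 256), sigma=32.0, seed=1)
```

Rendering each surface point shaded by its local lighting direction from the tilt map, rather than by a single global direction, then yields the inconsistent-lighting stimuli.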