Real-Time Object-Space Edge Detection using OpenCL
At its most basic, object-space edge detection iterates through all polygonal edges in each mesh to find those edges that satisfy one or more edge tests. Those that do are expanded and rendered, while the remainder are ignored. These 3D edges, and their resulting accuracy and customizability, set object-space methods apart from all other categories of edge detection. The speed and memory costs of iterating through every polygonal edge in each mesh every frame have inspired optimization research. In this paper, we explore methods to calculate object-space edges using programmable GPU technologies, including OpenCL. The OpenCL methods explored allow for a significant reduction in the number of calculations performed. Some also reduce rendering artifacts and memory usage compared with previous GPU techniques. Unfortunately, most uses of OpenCL for edge detection result in slower performance than shader-based techniques, though variations and optimizations may reduce this disadvantage in the future.

Background

Edge detection and rendering is an important non-photorealistic rendering technique. Rendered edges can be used for a multitude of purposes, including object differentiation, structural enhancement, highlighting, and blur-based antialiasing. They are also a required component of several graphical styles, such as toon rendering and sketchy rendering.

Edge Types

There are several types of edges. Each type requires different detection tests (sketched in code below), and a given method may not be able to detect every type.

• Contour: A polygon edge that connects two polygons, one front-facing and the other back-facing.
• Crease: A polygon edge that connects two polygons that meet at an angle sharper than some user-defined threshold.
• Boundary: A polygon edge that forms the side of only one polygon.
• Intersection: A collision of two polygons such that the line formed along the intersection is a polygon edge of only one or neither of the colliding polygons.
• Marked: A polygon edge flagged to always be rendered as an edge.

For the purposes of this paper, we ignore further discussion of intersection edges, as they are not detectable with object-space methods.
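As a rough sketch of how these tests look in code (our illustration, not the paper's), the routine below classifies a single edge. The Edge record, its field names, and the assumption of unit-length face normals are all hypothetical choices about how the adjacency data might be stored.

    /* Hypothetical per-edge record; the fields are illustrative, not the paper's. */
    typedef struct {
        float n0[3], n1[3];   /* unit normals of the two adjacent faces */
        float p[3];           /* any point on the edge, e.g., a vertex  */
        int   face_count;     /* 1 for boundary edges, 2 otherwise      */
        int   marked;         /* nonzero if flagged as a marked edge    */
    } Edge;

    static float dot3(const float a[3], const float b[3]) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    /* Returns nonzero if the edge passes any object-space edge test.
       cos_crease is the cosine of the user-defined crease angle. */
    int is_drawable(const Edge *e, const float eye[3], float cos_crease) {
        float v[3] = { e->p[0] - eye[0], e->p[1] - eye[1], e->p[2] - eye[2] };

        if (e->face_count == 1) return 1;                      /* boundary */
        if (e->marked) return 1;                               /* marked   */
        if (dot3(e->n0, v) * dot3(e->n1, v) < 0.0f) return 1;  /* contour: one adjacent face
                                                                  front-facing, one back-facing */
        return dot3(e->n0, e->n1) < cos_crease;                /* crease: normals differ by more
                                                                  than the user-defined angle */
    }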
Edge Detection Method Categories

Object-space edge detection is one category of edge detection. The other categories are hardware, image-space, and miscellaneous.

• Hardware: In these methods, edges are not detected but rendered directly as a byproduct of some other operation or series of operations. These methods are very fast, available on almost all hardware, and require neither mesh preprocessing nor additional memory. However, they typically render only contours, and they lack customizability.
• Image-space: These methods use two rendering passes. In the first, scene data, such as depth and normal information, is rendered and stored temporarily on the GPU. In the second pass, the stored data is analyzed with image-processing techniques to find areas of rapid change, which generally correspond to edges. Edge detection runs at constant speed, because the cost of the second pass depends on the number of pixels in the viewport, not the number of objects rendered. These methods can also easily detect edges that other methods cannot. However, edge accuracy is lacking, edge thickness is inconsistent, and, as with hardware methods, the resulting edges are not readily customizable.
• Object-space: In these methods, some or all of the polygon edges of each mesh are tested for drawability. Those edges that pass the tests are then expanded into quads or other structures and rendered like any other geometry; the polygon edges that are not drawable are discarded. Since the edge detection occurs in 3D space, the results are more accurate. The output is 3D geometry, so the edges can be customized to a great degree, including texturing, animating, and shading. However, the number of tests and rendering operations required for each edge makes object-space edge detection the slowest form of edge detection.
• Miscellaneous: These methods share no direct similarities with each other, though in terms of their strengths and weaknesses they resemble hardware methods.

In this paper, we focus only on object-space edge detection, as it is the method used by this paper's OpenCL-based edge detection.

OpenCL

OpenCL, which stands for Open Computing Language, provides a standardized interface for computational tasks on a variety of devices. It is specifically focused on data-parallel tasks, which require a single set of relatively simple operations to be performed on massive quantities of nearly atomic data. This is essentially the form of calculation found in GPU rasterization, so, not surprisingly, GPUs are often the preferred device for running OpenCL. Like vertex shaders, an OpenCL program (called a kernel) can access data stored on the GPU and perform operations on it before writing its output. However, OpenCL is less restricted in what it can do and what it can access. Though more versatile, an OpenCL operation can, without careful planning, take far longer than the equivalent operation implemented as a shader.
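To make the data-parallel model concrete, the following is a minimal OpenCL kernel sketch in which one work-item tests one edge for the contour condition. The buffer layout, kernel name, and parameter names are illustrative assumptions, not the paper's implementation; a real version would also need host-side buffer setup and an NDRange enqueue.

    // Minimal OpenCL kernel sketch: one work-item classifies one edge.
    // Buffer layout and names are assumptions for illustration only.
    __kernel void contour_test(__global const float4 *n0,  // unit normal of first adjacent face
                               __global const float4 *n1,  // unit normal of second adjacent face
                               __global const float4 *p,   // a point on each edge
                               __global int *drawable,     // output: 1 if the edge is a contour
                               const float4 eye,           // camera position
                               const uint num_edges)
    {
        uint i = get_global_id(0);
        if (i >= num_edges) return;            // guard against a padded global work size

        float4 v = p[i] - eye;                 // vector from the eye to the edge
        float  d = dot(n0[i].xyz, v.xyz) * dot(n1[i].xyz, v.xyz);
        drawable[i] = (d < 0.0f) ? 1 : 0;      // opposite signs: one face front-, one back-facing
    }

The drawable flags would then need to be compacted or read back before the passing edges are expanded and rendered; overheads of this kind are one plausible reason the OpenCL variants measured here trail shader-based techniques.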
Previous Work

Edge detection algorithms are old and well documented, but the paper by Markosian et al. [Markosian et al. 97] represents some of the first work in real-time edge detection. They stored edge adjacency information for each polygon edge. Contour edges tend to form loops around a mesh, so when a contour edge was found, they recursively performed the contour edge test on the connecting edges first. This approach tends to detect the longest, and therefore most significant, edges with a minimal number of random tests. Additionally, a small portion of the contour edges detected each frame were stored as starting points for the next frame's round of edge tests; without sudden movements, a significant number of contour edges remain the same from frame to frame. By checking only a very small percentage of all edges, they achieved a fivefold increase in rendering speed over testing every polygon edge individually. Of course, some contour edges could be missed entirely in a given frame, possibly resulting in flickering.

Gooch et al. [Gooch et al. 99] described a method in which they stored each edge's normal arc on a sphere surrounding the object. Groups of similar arcs, in Gauss map format, were stored hierarchically so that groups of edges could quickly be deemed all back-facing or all front-facing. A plane was placed at the origin of the sphere and aligned perpendicular to the view vector; edges whose arcs intersect the plane are contour edges. This technique sped up contour edge detection by 1.3 times for their S. Crank mesh and 5.1 times for a sphere. Unfortunately, it only works well under orthographic projection.

In a similar vein, Sander et al. [Sander et al. 00] created a hierarchical search tree of polygons. At each node, they created anchored cones that represented the maximum range of the normals possessed by the vertices in the node. This information can be used to quickly determine that no contour edges are possible for whole sets of nodes, without testing individual edges.

Jeff Lander [Lander 01] documented the optimization of ignoring edges that have co-planar adjacent polygons. Flat planes only generate drawable contour edges on their outside edges, not their internal edges, and they lack the angular difference between adjacent polygons needed to generate crease edges. During the preprocessing step, an additional test checks for co-planar adjacent polygons; if they are found, the edge is not added to the list of edges to test (a sketch of such a filter appears at the end of this section). If a mesh is constructed primarily of quads, this optimization can reduce the number of edge tests significantly: over 20% in the case of the Utah Teapot.

Other imaginative edge storage and detection methods exist. Aaron Hertzmann and Dennis Zorin [Hertzmann and Zorin 00] described a method of using 4D dual surfaces to determine contour edges with curve-plane intersections. Tom Hall [Hall 03] created a modification of Markosian et al.'s technique by focusing almost exclusively on tracking contour changes from frame to frame. By looking at adjacent edges to previously found edges and noticing the
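To make Lander's co-planar filter concrete, here is a minimal preprocessing sketch in the same style, reusing the hypothetical Edge record and dot3() helper from the earlier edge-test sketch; COPLANAR_EPSILON is an illustrative tolerance, not a value from the paper.

    /* Run once during preprocessing: edges rejected here are never added
       to the per-frame list of edges to test. */
    #define COPLANAR_EPSILON 1e-5f

    int worth_testing(const Edge *e) {
        if (e->face_count == 1 || e->marked) return 1;   /* always keep these */
        /* Nearly identical adjacent normals: the edge lies in the interior of a
           flat region, so it can never pass the contour or crease tests. */
        return dot3(e->n0, e->n1) < 1.0f - COPLANAR_EPSILON;
    }

For quad-heavy meshes, this filter is where the reduction of over 20% in edge tests cited above comes from.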
References

[1] John F. Hughes et al. Hardware-determined feature edges. NPAR '04, 2004.
[2] Lee Markosian et al. Real-time nonphotorealistic rendering. SIGGRAPH, 1997.
[3] Aaron Hertzmann et al. Illustrating smooth surfaces. SIGGRAPH, 2000.
[4] Peter-Pike J. Sloan et al. Interactive technical illustration. SI3D, 1999.