What Is It?

When I think of image editing, packages such as Photoshop and Paint Shop Pro come to mind. These packages are used to edit, transform, or manipulate, typically with extensive user guidance, one or more images to produce a desired result. Tasks such as selection, matting, blending, warping, morphing, etc. are often tedious and time consuming. Vision-assisted editing can lighten the burden, whether the goal is a simple cut-and-paste composition or a major special effect for a movie (see Doug Roble's article in this issue). Thus, this article focuses on computer vision techniques that reduce (often significantly) the time and effort involved in editing images and video.

The goal of vision systems is to detect edges, regions, shapes, surface features, lighting properties, 3D geometry, etc. Most currently available image editing tools and filters utilize low-level, 2D geometric or image processing operations that manipulate pixels. However, vision techniques extract descriptive object or scene information, thus allowing a user to edit in terms of higher-level features.

Fully automatic computer vision remains a major focus in the computer vision community. Complete automation is certainly preferred for tasks such as robotic navigation, image/video compression, model-driven object delineation, multiple image correspondence, image-based modeling, or any time autonomous interpretation of images/video is desired. However, general-purpose image editing will continue to require human guidance due to the essential role of the user in the creative process and in identifying which image components are of interest. Most vision-assisted image editing techniques fall somewhere between user-assisted vision and vision-based interaction.
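To make the contrast concrete, a low-level, pixel-manipulating filter of the kind the article describes can be sketched as a simple finite-difference edge-strength map. This is an illustrative sketch, not any particular product's filter: the function name and the plain list-of-lists image representation are assumptions, and real editors use more robust operators (e.g., Sobel kernels).

```python
def edge_strength(image):
    """Per-pixel gradient magnitude via forward differences.

    A purely low-level operation: it manipulates pixel values and
    knows nothing about objects or scene structure. `image` is a
    list of equal-length rows of grayscale intensities.
    """
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows - 1):
        for c in range(cols - 1):
            dx = image[r][c + 1] - image[r][c]  # horizontal difference
            dy = image[r + 1][c] - image[r][c]  # vertical difference
            out[r][c] = (dx * dx + dy * dy) ** 0.5
    return out
```

The point of the sketch is what it lacks: the output is just more pixels, with no notion of which edges belong to which object — exactly the gap that vision techniques aim to fill.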
User-assisted vision describes those techniques where the user interacts in image (or parameter) space to begin and/or guide a vision algorithm so that it produces a desired result. For example, Photoshop's magic wand computes a connected region of similar pixels based on a mouse click in the area to be selected. Vision-based interaction refers to those methods where the computer has done some or all of the "vision" part and the user interacts within the resulting vision-based feature space. One example is the ICE (Interactive Contour Editing) system [4], which computes an image's edge representation and then allows a user to interactively select edge groupings to extract or remove image features.

A tool is classified based on where a user can "touch" the data of the underlying vision function: the process that computes results from inputs. User-assisted vision manipulates the input (or domain) space of the vision function, while vision-based interaction provides access to the result (or range). Some tools allow intervention at several steps in the process, including the ability to adjust partial or intermediate results. Regardless of a tool's classification, there are algorithmic properties that are desirable for image editing tools. These tools should be:
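The magic-wand behavior described above is essentially seeded region growing: starting from the clicked pixel, collect every connected pixel whose intensity lies within a tolerance of the seed's value. The following is a minimal sketch of that idea; the function name, the fixed global tolerance, and the list-of-lists grayscale image are assumptions, not Photoshop's actual implementation.

```python
from collections import deque

def magic_wand(image, seed, tolerance):
    """Seeded region growing: return the set of (row, col) pixels
    4-connected to `seed` whose intensity is within `tolerance`
    of the seed pixel's intensity."""
    rows, cols = len(image), len(image[0])
    sr, sc = seed
    target = image[sr][sc]
    selected = set()
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) in selected or not (0 <= r < rows and 0 <= c < cols):
            continue
        if abs(image[r][c] - target) > tolerance:
            continue  # pixel too dissimilar; region stops here
        selected.add((r, c))
        # enqueue the 4-connected neighbours
        frontier.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return selected

# tiny example: a bright square on a dark background
img = [
    [10,  10,  10, 10],
    [10, 200, 210, 10],
    [10, 205, 198, 10],
    [10,  10,  10, 10],
]
region = magic_wand(img, (1, 1), tolerance=20)
```

Note how this fits the user-assisted-vision category: the user's only touch point is the input (domain) of the vision function — the seed click and a tolerance parameter — while the algorithm produces the selection.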
[1] William A. Barrett, et al., "Intelligent scissors for image composition," SIGGRAPH, 1995.
[2] Jitendra Malik, et al., "Modeling and Rendering Architecture from Photographs: A hybrid geometry- and image-based approach," SIGGRAPH, 1996.
[3] M.M. Covell, et al., "Dynamic occluding contours: a new external-energy term for snakes," Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1999.
[4] Chong-Wah Ngo, et al., "Detection of gradual transitions through temporal slice analysis," Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1999.
[5] Peisheng Gao, et al., "2-D shape blending: an intrinsic solution to the vertex path problem," SIGGRAPH, 1993.
[6] Michael Gleicher, et al., "Image Snapping," SIGGRAPH, 1995.
[7] Sung Yong Shin, et al., "Image metamorphosis using snakes and free-form deformations," SIGGRAPH, 1995.
[8] Richard Szeliski, et al., "A multi-view approach to motion and stereo," Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1999.
[9] William A. Barrett, et al., "Toboggan-based intelligent scissors with a four-parameter edge model," Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1999.
[10] Thomas W. Sederberg, et al., "A work minimization approach to image morphing," The Visual Computer, 1998.
[11] William A. Barrett, et al., "Interactive Segmentation with Intelligent Scissors," Graphical Models and Image Processing, 1998.
[12] James H. Elder, et al., "Image Editing in the Contour Domain," IEEE Trans. Pattern Anal. Mach. Intell., 2001.
[13] Tomaso Poggio, et al., "Image Representations for Visual Learning," Science, 1996.
[14] Jitendra Malik, et al., "Recovering photometric properties of architectural scenes from photographs," SIGGRAPH, 1998.