SoC 2008 Masking in GUI

From PanoTools.org Wiki

Revision as of 10:00, 9 May 2008

Introduction

The objective of this project is to provide the user with an easy-to-use interface for quickly creating blending masks. After the images are aligned and shown in the preview window, users will have the option of creating blending masks. Currently the goal is to provide a mask-creation option in the preview window: since it already shows the aligned images, it is the natural place for users to create appropriate masks once all the images are aligned.

Project Outline

Implementation will be done in two phases. In the first phase, the basic framework will be created. Users will be able to mark regions as either foreground or background and specify the contribution of each segment. Based on the marked regions, a polygonal outline of the foreground object will be created. Since the outline may not always be accurate (e.g. in the case of a low-contrast edge), a polygon editing option will be provided (the idea is from [2]). When the user chooses to create the panorama, the masks are generated as output and passed to enblend. The editing features that will be available in this phase are:

  • Zooming in/out
  • Setting the brush stroke size
  • A polygon editing mode for fine-tuning boundary regions
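To make the mask-output step above concrete, here is a minimal sketch of turning an edited polygon outline into an 8-bit blending mask, using an even-odd ray-casting point-in-polygon test. All names and the output convention (255 = keep, 0 = exclude) are assumptions for illustration, not Hugin or enblend internals.

```python
def point_in_polygon(x, y, poly):
    """Return True if (x, y) lies inside the polygon given as [(x, y), ...]."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def polygon_to_mask(poly, width, height):
    """Rasterize the polygon into a row-major 8-bit mask (sampled at pixel centers)."""
    return [[255 if point_in_polygon(x + 0.5, y + 0.5, poly) else 0
             for x in range(width)]
            for y in range(height)]

# A square foreground region inside an 8x8 image.
mask = polygon_to_mask([(1, 1), (6, 1), (6, 6), (1, 6)], 8, 8)
```

In practice the mask would be written as a grayscale image alongside each source image; this sketch only shows the rasterization idea.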

The second phase will focus on 3D segmentation. For instance, if an object is moving in front of the camera and we want to exclude it from the final scene, the user currently has to mark that object in every image. The second phase will make this simpler by extending the segmentation into 3D.

Timeline

1. Before Start of Coding Phase:

  • Determine the input data type and format, and how the user will interact
  • Construct a preliminary design of the software
  • Outline how the algorithm will work
  • Finalize the scope of the project

2. Coding Phase:

2.1 Before Mid Term Evaluation

Implement a basic framework that can:

  • take an image stack of a particular format
  • allow users to mark regions
  • incorporate an algorithm to learn the color model from the user-defined area
  • start implementing 2D multi-label image segmentation
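The "learn the color model" step above can be sketched as fitting a simple per-channel Gaussian to the pixels the user marked as foreground, then scoring any pixel by its normalized distance to that model. The real implementation would more likely use Gaussian mixtures or histograms feeding a graph-cut data term (as in Boykov-Jolly [1]); this minimal version is illustrative only, and all names are hypothetical.

```python
import math

def fit_color_model(pixels):
    """pixels: list of (r, g, b) tuples. Returns per-channel (means, stddevs)."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    stds = []
    for c in range(3):
        var = sum((p[c] - means[c]) ** 2 for p in pixels) / n
        stds.append(max(math.sqrt(var), 1.0))  # floor to avoid divide-by-zero
    return means, stds

def foreground_score(pixel, model):
    """Smaller score = closer to the learned foreground colors."""
    means, stds = model
    return sum(((pixel[c] - means[c]) / stds[c]) ** 2 for c in range(3))

# Learn from a few reddish pixels the user brushed over.
model = fit_color_model([(200, 30, 30), (210, 25, 35), (195, 40, 28)])
red_score = foreground_score((205, 32, 30), model)   # similar to the marked area
blue_score = foreground_score((20, 30, 200), model)  # very different color
```

Such a score could serve as the unary (data) term of the 2D segmentation, with a graph cut adding the boundary smoothness term.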

2.2 After Mid Term Evaluation

  • perform image segmentation on the stack of images (3D segmentation problem where the user will only need to roughly mark the region on a small subset of the images)

At the end of this stage the segmentation algorithm should be able to correctly identify similar regions in successive images.
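The propagation goal above can be illustrated with a toy sketch: a mask marked on one image is carried to the next image by re-classifying its pixels against the colors found under the previous mask. A real solution would solve a single 3D graph cut over the whole stack; this nearest-color threshold, and every name in it, is only an assumed stand-in for the concept.

```python
def propagate_mask(prev_image, prev_mask, next_image, threshold=60):
    """Images are row-major grids of (r, g, b) tuples; masks are 0/255 grids."""
    # Collect the foreground colors from the previous frame.
    fg = [prev_image[y][x]
          for y in range(len(prev_mask))
          for x in range(len(prev_mask[0]))
          if prev_mask[y][x] == 255]

    def close_to_fg(pixel):
        # L1 color distance to any known foreground color.
        return any(sum(abs(pixel[c] - f[c]) for c in range(3)) < threshold
                   for f in fg)

    return [[255 if close_to_fg(next_image[y][x]) else 0
             for x in range(len(next_image[0]))]
            for y in range(len(next_image))]

# A red object on the left of a blue background...
prev_image = [[(255, 0, 0), (0, 0, 255)],
              [(255, 0, 0), (0, 0, 255)]]
prev_mask = [[255, 0], [255, 0]]
# ...moves one column to the right in the next frame.
next_image = [[(0, 0, 255), (250, 5, 0)],
              [(0, 0, 255), (250, 5, 0)]]
new_mask = propagate_mask(prev_image, prev_mask, next_image)
```

The mask follows the object without the user re-marking it, which is exactly the workflow simplification phase two aims for.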

References

[1] Yury Boykov, Marie-Pierre Jolly, "Interactive Graph Cuts for Optimal Boundary & Region Segmentation of Objects in N-D Images," Proc. Eighth IEEE ICCV, vol. 1, pp. 105-112, 2001.

[2] Yin Li, Jian Sun, Chi-Keung Tang, Heung-Yeung Shum, "Lazy Snapping," ACM Transactions on Graphics (Proceedings of SIGGRAPH), vol. 23, no. 3, 2004.

[3] Aseem Agarwala, Mira Dontcheva, Maneesh Agrawala, Steven Drucker, Alex Colburn, Brian Curless, David Salesin, Michael Cohen, "Interactive Digital Photomontage," ACM Transactions on Graphics (Proceedings of SIGGRAPH), 2004.
