SoC2007 project Anti Ghosting

Revision as of 20:52, 4 May 2010 by Flixh (talk | contribs) (Undo revision 12352 by Marie101, revert link spamming)


See SoC 2007 overview for usage hints and a page list.


High Dynamic Range (HDR) panoramas are formed by taking numerous pictures at different exposures, merging and tone mapping them so the result shows the most detail to the human eye, and stitching them into a single image. Since every area of an HDR image is blended from multiple base images, moving objects such as people appear semi-transparent, blurred, or incomplete. This phenomenon is called ghosting. Some commercial products support ghost elimination, but most of them tolerate only small camera movements between shots. The goal of this project is to devise a robust blending algorithm that eliminates ghosting in an HDR panorama.
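To illustrate why ghosting arises, here is a minimal sketch of a per-pixel exposure merge in Python with NumPy. It is not the project's algorithm; the function name, hat-shaped weighting, and array conventions are illustrative assumptions. Because every output pixel averages all input shots, a subject that moved between shots is blended into every pixel it touched.

```python
import numpy as np

def merge_exposures(exposures, times):
    """Merge aligned low-dynamic-range shots into one radiance map.

    exposures: list of float arrays in [0, 1], all the same shape.
    times: exposure time of each shot, in seconds.

    Illustrative sketch only: each pixel is a weighted average over
    all shots, which is exactly what produces ghosts when the scene
    changes between exposures.
    """
    num = np.zeros_like(exposures[0])
    den = np.zeros_like(exposures[0])
    for img, t in zip(exposures, times):
        # Hat weighting: trust mid-tones, distrust pixels that are
        # nearly black or nearly saturated in this shot.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t  # divide by exposure time to get radiance
        den += w
    return num / np.maximum(den, 1e-6)
```

For a static scene the normalized radiances agree and the average is harmless; anti-ghosting has to handle the pixels where they do not.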

Deliverables and Details

Go to the blog for project progress, results, and more!


The execution of the project involves 3 phases: research, implementation, and integration.

Phase 1 - Research This phase involves gaining a deeper understanding of the current challenges of anti-ghosting, examining the methods used by existing software and proposed in research papers, and devising a plan of attack.
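One family of techniques likely to come up in this survey detects ghosts by how much exposures disagree after normalizing for exposure time. The sketch below is a hypothetical example of that idea in Python with NumPy, not the method this project will necessarily choose; the function name and threshold are assumptions.

```python
import numpy as np

def ghost_mask(exposures, times, threshold=0.01):
    """Flag pixels whose normalized radiance varies across shots.

    Illustrative variance-based ghost detection: a static pixel has
    (nearly) the same radiance in every exposure once divided by the
    exposure time, while a moving object produces high variance.
    Flagged regions would then be filled from a single reference
    exposure instead of a blend of all shots.
    """
    # Stack per-shot radiance estimates along a new leading axis.
    radiances = np.stack([img / t for img, t in zip(exposures, times)])
    # High variance across shots marks a likely ghost pixel.
    return np.var(radiances, axis=0) > threshold
```

The threshold trades false positives (noise flagged as motion) against missed ghosts, which is part of why robustness across use cases is hard.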

Phase 2 - Implementation In this phase, the core functionality of the anti-ghosting algorithm derived from phase one will be implemented. This phase should take up the majority of the project timeline, and will probably encounter the most unexpected delays. By the end of the phase, the blending algorithm should be fairly robust (working on most if not all possible use cases) and able to produce its result within a reasonable amount of time.

Phase 3 - Integration During this phase, the algorithm will be integrated into the current HDR algorithm so it can be used by the general populace. This phase also accounts for optimization of the code and any final debugging to tie up the loose ends.

Detailed Schedule

Phase 1 - Research (2 weeks)

  • Weeks 1-2:
    • obtain several sets of sample images to use for testing later
    • research different algorithms used in existing software and proposed in research papers, and understand their pros and cons.
    • implement some of the techniques suggested in the papers, if time allows, to get a feel for any potential problems.
    • design an efficient and robust algorithm and plan out the details of the implementation and integration phases.

Phase 2 - Implementation (7 weeks)

  • Week 3
    • build test scripts for test image sets
    • implement the algorithm decided upon in phase 1 in MATLAB
    • test and improve upon the algorithm
  • Weeks 4-6
    • port to a C++ library
    • build a simple application that merges input files to a single HDR image
  • Weeks 7-9: time for general testing and debugging

Phase 3 - Integration (5 weeks)

  • Week 10: integration into hugin
  • Weeks 11-13: testing/debugging and optimization
  • Week 14: documentation, etc.


Mentors

Pablo d'Angelo

Students planning to apply

Jing Jin