SoC 2009 idea

Latest revision as of 01:49, 31 March 2011


Introduction

If you are a student who would like to participate in Google Summer of Code 2009, do the following:

  • find out what ideas we have for SoC projects this year (read below);
  • decide whether you want to pick one of those tasks or propose your own idea;
  • join our community at hugin-ptx;
  • introduce yourself and tell us about your plans and wishes;
  • add your proposal to the student proposal page (see examples from last year).

IMPORTANT: at the time of writing it is not yet known whether we will be admitted to Google Summer of Code 2009. We cannot guarantee you a place in the program, but we recommend you start preparing your application early, as the application process is very competitive.

Development style

Most of the projects below are related to Hugin, and some also relate to Panotools or tlalli. Hugin is mostly written in C++ and uses the VIGRA image processing library to support different image types (for example 8-bit, 16-bit and float (HDR) images). The core functionality is implemented in a platform-independent C++ library, which is used by the GUI, based on the wxWidgets toolkit, and by the command-line programs (nona, fulla). We also very much welcome contributions to Enblend/Enfuse.

The development of the projects should take place in a separate branch of the project's CVS (or SVN) repository. Communication with the mentors should usually happen through the appropriate mailing list. All code should work on the major supported platforms (Linux, OS X, Windows).

Mentors

The following people have volunteered to be primary mentors:

  • Andrew Mihal
  • Bruno Postle
  • Daniel M. German
  • Tom K. Sharpless
  • Tim Nugent
  • John Cupitt
  • Sébastien Roy

The following people have volunteered to be secondary mentors:

  • Pablo d'Angelo
  • Jim Watters

Application Template

SoC2009 Application Template

Possible Projects

We welcome you to propose your own ideas.

Some of the SoC2007 and SoC2008 project proposals were not completed in past years and remain potential projects for this year:

Python Bindings

Note: an experimental but functional set of Python bindings is ready to be integrated in Hugin 2011.2.0.

Expose all functions and libraries as Python bindings.

3D extension of panotools library

Update Feb 2010: this is largely complete with the 'mosaic' mode in Hugin 2010.1

The current assumption of panotools is that all images are shot from a single point of view, each in a different direction. Develop and implement the mathematics to adjust for a shift in the point from which the pictures were taken.
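The problem can be sketched numerically. The minimal pinhole-camera example below (the points, shift, and `project` helper are invented for illustration, not panotools code) shows why the single-viewpoint model breaks down: shifting the camera displaces near scene points more than far ones, a depth-dependent parallax that no pure rotation can reproduce.

```python
import numpy as np

# Sketch: why a viewpoint shift breaks the rotation-only model.
# Under translation, the image displacement of a point depends on
# its depth (parallax), so no single remapping fits all depths.

def project(P, t=np.zeros(3)):
    """Pinhole projection of 3D point P seen from camera centre t."""
    X = P - t
    return X[:2] / X[2]          # focal length 1, principal point 0

near = np.array([0.0, 0.0, 2.0])   # point 2 units away
far  = np.array([0.0, 0.0, 10.0])  # point 10 units away

t = np.array([0.5, 0.0, 0.0])      # shift the camera 0.5 to the right

shift_near = project(near, t) - project(near)
shift_far  = project(far, t) - project(far)

print(shift_near[0])  # -0.25 -> the near point moves a lot
print(shift_far[0])   # -0.05 -> the far point moves little
```

For a planar subject (the mosaic case) all points share one depth, which is what makes the problem tractable.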

enblend/enfuse gimp plugin

Various GUIs for enblend and enfuse already exist (e.g. ImageFuser), but none is as useful as a GIMP plugin would be. For example, GIMP opens multilayer TIFF files created by hugin and other tools; an option to 'blend visible layers with enblend' would allow manual adjustment of masks during blending.

Note that the standalone ImageFuser and ExpoBlending tools provide GUIs for enfuse; these should give some idea of the options that need to be made available in a GIMP plugin. ExpoBlending is also a digiKam plugin. There are currently no enblend GUIs.

bracketing pano model

Update Feb 2010: this is already complete, with the layout mode finished as part of GSoC 2009

Currently the Hugin pano model has no provision for bracketed / stacked images. This project would modify the pano model so that image stacks can be treated as a single image, or optimized locally as a stack before being optimized globally as a panorama.
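One way to picture the requested change (hypothetical names, not Hugin's actual data model): only the stack carries an orientation, while its member images differ only in exposure, so the global optimiser sees one position per stack rather than one per image.

```python
from dataclasses import dataclass, field

# Sketch of a pano model with first-class stacks. All names here are
# invented for illustration; Hugin's real model differs.

@dataclass
class Image:
    filename: str
    exposure_ev: float        # exposure value within the stack

@dataclass
class Stack:
    yaw: float = 0.0          # one orientation shared by all members
    pitch: float = 0.0
    roll: float = 0.0
    images: list = field(default_factory=list)

# three brackets shot in the same direction form one stack
stack = Stack(yaw=30.0, images=[
    Image("img_0.tif", -2.0),
    Image("img_1.tif",  0.0),
    Image("img_2.tif", +2.0),
])

# the global optimiser would then see one position per stack,
# not one per image
positions = [(stack.yaw, stack.pitch, stack.roll)]
print(len(stack.images), len(positions))   # 3 images, 1 position
```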

hugin RAW support

(note this proposal needs to be checked for sanity by someone who understands RAW formats)

hugin is never going to be a RAW converter with all the options of a tool such as ufraw, so I would be wary of adding half-baked RAW support to hugin that would produce second-rate results.

Something that ought to work very well would be if hugin/enblend could support RAW output, specifically 'linear DNG'. In this workflow, hugin would import RAW via dcraw as 16bit linear data with no correction. It would then output linear DNG files which could be opened in any RAW converter for further tweaking.

This would be a good Summer of Code project: modify hugin to use dcraw as an input plugin, integrate TCA correction, modify enblend to write linear DNG (or create a standalone tiff2dng tool), modify the hugin GUI to enable it, and fix any ufraw bugs in reading linear DNG.
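The import half of this workflow might start with something like the sketch below. The `-4` (16-bit linear output) and `-T` (write TIFF) flags are real dcraw options; the file name and the helper function are invented for illustration.

```python
# Sketch of the RAW import step described above: hand a RAW file to
# dcraw and get back 16-bit linear TIFF data with no corrections,
# suitable as hugin input.

def dcraw_command(raw_file):
    """Command line converting a RAW file to a 16-bit linear TIFF."""
    return ["dcraw", "-4", "-T", raw_file]

cmd = dcraw_command("IMG_0001.CR2")
print(" ".join(cmd))   # dcraw -4 -T IMG_0001.CR2

# a real importer would then execute it and read the resulting .tiff:
# import subprocess; subprocess.run(cmd, check=True)
```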

hugin colour balancing

note: a grey picker has been implemented in Hugin 2011.0.0

Internally hugin uses the EMoR photometric model. This means that adjusting the colour balance in hugin by altering the red/blue channel multipliers will give better results than doing the same in an image editor such as the GIMP or Photoshop, provided the photometric parameters for the camera are calibrated properly. There is potentially interesting unexplored territory here: grey pickers, temperature sliders, curve adjustment, etc.
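A grey picker reduces to simple arithmetic on linear channel values. The sketch below (invented helper, illustrative numbers; hugin would apply the multipliers inside its linearised photometric model) derives red/blue multipliers from a pixel the user marks as neutral:

```python
# Sketch of a grey picker: given linear RGB values of a pixel the user
# says is neutral grey, derive channel multipliers relative to green.

def grey_picker_multipliers(r, g, b):
    """Multipliers that make the picked pixel neutral (r == g == b)."""
    return g / r, 1.0, g / b   # red, green, blue multipliers

# picked pixel is too warm: red high, blue low
mr, mg, mb = grey_picker_multipliers(0.8, 0.5, 0.4)
print(mr, mg, mb)   # 0.625 1.0 1.25

# applying the multipliers neutralises the pixel
print(0.8 * mr, 0.5 * mg, 0.4 * mb)   # 0.5 0.5 0.5
```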

A related problem is that hugin has an internal 'lens' representation used to link lens parameters for different photos together. This capability really should be split into three models: a lens, grouping photos taken with the same lens but potentially different CCDs; a sensor, grouping photos with the same CCD but potentially different lenses; and a position, for stacked images that can be rotated together (see the bracketing pano model above). This is tinkering with hugin fundamentals and needs to be overseen by Pablo. Update: these backend changes were implemented as part of the GSoC 2009 layout project.

Straight-line detection for automated lens calibration

Update: note this was added as the calibrate_lens tool as part of GSoC 2009 and is already available in Hugin 2009.2.0; however, picking it up and integrating it would be part of any 'lens database' project.

One of the techniques for lens calibration is to optimise straight lines. Tom Sharpless suggests: "Hugin needs an easy and reliable way to do straight line lens calibration. After many years using various calibration systems, photogrammetrists seem to have decided the straight line method is best. It is robust, accurate, and can often use naturally occurring straight lines rather than special calibration rigs. And it works well with fisheye lenses, which many other methods do not.

The key is software that can automatically follow strong "line-like" curves and estimate their positions to subpixel accuracy. Then, using a reasonable model of the lens, an optimizer like the one in Hugin can fit parameters that straighten the lines. Since the raw images of calibration lines are generally curved, a human may have to designate which lines are straight in reality. However, fully automatic straight line calibration is theoretically possible, based on jointly optimizing lens parameters and estimated line shapes.
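As a toy illustration of that optimisation (the distortion model, numbers, and helpers below are invented for the sketch, not panotools code): points on a straight line are bent by a radial distortion r' = r(1 + k·r²), and a search over k finds the value whose inverse mapping makes the observed points collinear again.

```python
import numpy as np

# Toy straight-line calibration: distort a known straight line with
# k = -0.1, then recover k by minimising the collinearity residual of
# the undistorted points over candidate values.

def distort(pts, k):
    r2 = (pts ** 2).sum(axis=1, keepdims=True)
    return pts * (1 + k * r2)

def undistort(pts, k, iters=50):
    """Invert the radial model by fixed-point iteration."""
    out = pts.copy()
    for _ in range(iters):
        r2 = (out ** 2).sum(axis=1, keepdims=True)
        out = pts / (1 + k * r2)
    return out

def line_residual(pts):
    """RMS distance of points from their best-fit straight line."""
    c = pts - pts.mean(axis=0)
    # the smallest singular value measures deviation from collinearity
    return np.linalg.svd(c, compute_uv=False)[-1] / np.sqrt(len(pts))

# a straight vertical line at x = 0.6, seen through a lens with k = -0.1
line = np.stack([np.full(21, 0.6), np.linspace(-1.0, 1.0, 21)], axis=1)
observed = distort(line, -0.1)

# brute-force search over candidate k values
candidates = [i / 1000 for i in range(-120, 121)]
best_k = min(candidates, key=lambda k: line_residual(undistort(observed, k)))
print(round(best_k, 3))   # -0.1, the true distortion coefficient
```

In a real tool the brute-force scan would be replaced by the Levenberg-Marquardt style optimiser hugin already has, and the line points would come from subpixel edge following rather than being synthesised.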

A tool of this kind would be especially valuable for calibrating fisheye lenses, something the PanoTools family of software has always done poorly. Part of the problem is that the original PT library used the equal-angle model for fisheye lenses, instead of the equal-area model that most modern fisheyes are actually designed for. Apparently libpano13 now has the equal-area model too, but Hugin continues to use the equal-angle one. So an important part of this project could be to revise Hugin to handle fisheye lenses more correctly."
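For reference, the two fisheye models differ as follows (these are the standard projection formulas; the focal length is an arbitrary example value):

```python
import math

# The two fisheye models mentioned above, for focal length f and
# angle theta from the optical axis:
#   equal-angle (equidistant):  r = f * theta
#   equal-area (equisolid):     r = 2 * f * sin(theta / 2)

def r_equal_angle(theta, f):
    return f * theta

def r_equal_area(theta, f):
    return 2 * f * math.sin(theta / 2)

# at 90 degrees off-axis the two models already disagree noticeably
theta, f = math.pi / 2, 8.0   # e.g. an 8 mm fisheye
print(round(r_equal_angle(theta, f), 2))  # 12.57 mm
print(round(r_equal_area(theta, f), 2))   # 11.31 mm
```

Fitting the wrong model leaves systematic residuals that no polynomial correction term fully absorbs, which is why the model choice matters for calibration.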

Simple mask editing

Update: note this is complete and was implemented separately from GSoC; it has been available since Hugin 2010.2.0 and so can't be an SoC project

(note there was an automated masking project in 2008)

Firstly, there are two mask-editing scenarios, and they are almost unrelated:

  1. masking out objects that you don't want to appear in the scene.
  2. masking to put one object in-front of another.

Number 2 is never going to be done in hugin; this is a job for an image editor, with feathered eraser brushes and clone tools. For this we really need some kind of multilayer output, and an enblend/enfuse GIMP plugin to merge these layers two at a time should be enough to make this work well (see above).

(Note there is a 'layered' target in the 'plugin' which does this multilayered TIFF output)

The simplest way to provide number 1 is to change the crop tool into a polygon editor for masking the original images, and store the polygon coordinates in the 'i' lines of the .pto file - Masking would be performed at the remapping stage as it currently is for circular crops.

This is conceptually very simple. At a later date the coordinates can be translated back and forth between the photo and panorama spaces, then automation of the mask can build on this.
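Applying such a polygon at the remapping stage amounts to a point-in-polygon test per source pixel. A minimal ray-casting sketch (coordinates invented for illustration):

```python
# Sketch of how a polygon mask stored as image coordinates could be
# applied at remap time: a ray-casting point-in-polygon test decides
# whether a source pixel is masked out.

def inside(point, polygon):
    """Ray-casting point-in-polygon test (polygon = list of (x, y))."""
    x, y = point
    hit = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # count crossings of a horizontal ray extending to the right
        if (y1 > y) != (y2 > y):
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                hit = not hit
    return hit

# mask out a square region of the source image
mask = [(10, 10), (20, 10), (20, 20), (10, 20)]
print(inside((15, 15), mask))  # True  -> pixel is masked out
print(inside((5, 5), mask))    # False -> pixel passes through
```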

Implementing a new projection model for the creation of mosaics in panotools

Update: this was a GSoC project that failed in 2009; it has however been implemented outside of SoC and has been available in Hugin since 2010.2.0.

Currently panotools implements a 3d spherical model for panorama projection. This means that it assumes that the camera rotates around its no-parallax-point from one frame to another. This model is perfectly good for making spherical panoramas.

There is, however, another use for registration and stitching of photographs: mosaics. In this scenario the object to be captured is a flat plane (such as a wall, an aerial photograph, or partial scans of a larger work).

This project will require reverse-engineering and documenting the current projection model, then proposing and implementing a new one. The panotools remapping infrastructure is very flexible, so this should be relatively straightforward to accomplish.
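The underlying geometry is standard multi-view geometry rather than anything panotools-specific: for a scene that is a single plane, two camera views are related by one 3x3 homography H = R + t·nᵀ/d, so every pixel can be remapped without per-pixel depth. A minimal sketch (illustrative numbers, normalised coordinates):

```python
import numpy as np

# Sketch of why a planar mosaic model works: points on the plane
# n.T @ X = d seen from two cameras (rotation R, translation t) are
# related by the plane-induced homography H = R + (t @ n.T) / d.

R = np.eye(3)                        # camera purely translated
t = np.array([[0.5], [0.0], [0.0]])  # half a unit to the right
n = np.array([[0.0], [0.0], [1.0]])  # plane z = d facing the camera
d = 2.0

H = R + (t @ n.T) / d

def remap(H, uv):
    """Apply the homography to a pixel in normalised coordinates."""
    v = H @ np.array([uv[0], uv[1], 1.0])
    return v[:2] / v[2]

# every point on the plane remaps consistently -- here the image
# simply shifts by t_x / d = 0.25, wherever the point lies
print(remap(H, (0.0, 0.0)))  # [0.25 0.  ]
print(remap(H, (0.3, 0.4)))  # [0.55 0.4 ]
```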

zenith/nadir blending for enblend-enfuse

Currently enblend-enfuse is not aware of the zenith/nadir seam in full spherical images, which sometimes results in star-like artifacts. Strategies have been devised to work around this limitation, such as re-projecting the affected areas, blending them, and merging them back. It would be nice if the code could do this automatically.
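The re-projection step can be sketched as a gnomonic (rectilinear) projection centred on the nadir, which removes the pole singularity so ordinary seam blending applies; the function and test points below are invented for illustration:

```python
import math

# Sketch of the workaround described above: map equirectangular pixels
# near the nadir into a rectilinear patch looking straight down, blend
# there, then map back. Near the pole, equirectangular coordinates
# stretch a tiny region across the whole image width; the gnomonic
# patch does not.

def nadir_to_rectilinear(lon, lat):
    """Gnomonic projection centred on the nadir (lat = -90 deg).
    lon/lat in radians; lat must be in the southern hemisphere."""
    x, y, z = (math.cos(lat) * math.cos(lon),
               math.cos(lat) * math.sin(lon),
               math.sin(lat))
    return x / -z, y / -z     # project onto the plane z = -1

# two points equally close to the pole but at opposite longitudes are
# far apart in equirectangular coordinates, yet land close together in
# the rectilinear patch
a = nadir_to_rectilinear(0.0,     math.radians(-80))
b = nadir_to_rectilinear(math.pi, math.radians(-80))
print([round(v, 3) for v in a])  # [0.176, 0.0]
print([round(v, 3) for v in b])  # [-0.176, 0.0]
```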

seam optimization in enblend-enfuse

The seam optimization algorithm of enblend-enfuse can be improved. Andrew would be interested in mentoring such a project.
