SoC 2009 idea
- 1 Introduction
- 2 Development style
- 3 Mentors
- 4 Application Template
- 5 Possible Projects
- 5.1 Python Bindings
- 5.2 3D extension of panotools library
- 5.3 enblend/enfuse gimp plugin
- 5.4 bracketing pano model
- 5.5 hugin RAW support
- 5.6 hugin colour balancing
- 5.7 Straight-line detection for automated lens calibration
- 5.8 Simple mask editing
- 5.9 Implementing a new projection model for the creation of mosaics in panotools
- 5.10 zenith/nadir blending for enblend-enfuse
- 5.11 seam optimization in enblend-enfuse
If you are a student willing to participate in Google Summer of Code 2009, follow the steps below:
- find out what ideas we have for SoC projects this year (read below);
- decide whether you want to pick one of those tasks or pursue your own idea;
- join our community at hugin-ptx;
- introduce yourself and tell us about your plans and wishes;
- add your proposal to the student proposal page - see examples from last year.
IMPORTANT: at the time of writing it is not yet known whether we will be admitted to Google Summer of Code 2009. We cannot guarantee you a place in the program, but we recommend you start preparing your application early, as the application process is very competitive.
Most of the projects below are related to Hugin, and some also relate to Panotools or tlalli. Hugin is mostly written in C++, and uses the VIGRA image processing library to support different types of images (for example, 8bit, 16bit and float (HDR) images). The core functionality is implemented in a platform-independent C++ library, which is used by the GUI, based on the wxWidgets toolkit, and by the command-line programs (nona, fulla). We also very much welcome contributions to Enblend/Enfuse.
The development of each project should take place in a separate branch of the project's CVS (or SVN) repository. Communication with the mentors should usually happen through the appropriate mailing list. All code should work on the major supported platforms (Linux, OSX, Windows).
The following people have volunteered to be primary mentors:
- Andrew Mihal
- Bruno Postle
- Daniel M. German
- Tom K. Sharpless
- Tim Nugent
- John Cupitt
- Sébastien Roy
The following people have volunteered to be secondary mentors:
- Pablo d'Angelo
- Jim Watters
We welcome you to propose your own ideas.
- SoC2007_projects#Interactive panoramic viewer (this was completed, but there is further possible work to be done, particularly a joint project with VLC to integrate the viewer in their media player) (chosen and completed in 2009 GSoC)
- SoC2007_projects#Processing of very large images (using the VIPS framework, or even GEGL if ready)
- SoC2007_projects#Architectural Overhaul of Panotools
- SoC_2008_ideas#Ghost_removal_for_enfuse (done in 2009 GSoC)
- SoC_2008_ideas#Lens_Database (the library part is done, see lensfun, but there is still no system for updating the database)
- SoC_2008_ideas#tCA_Correction (this is done; the tca_correct tool now exists in the hugin-0.7.0 release)
Python Bindings
Expose all hugin functions / libraries as Python bindings.
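Bindings could be generated with a tool such as SWIG or written by hand against a C API. As a trivial, hypothetical illustration of the low-level approach, ctypes can expose a C function to Python directly (shown here with libm's sqrt, since hugin's own libraries are C++ and would first need an extern "C" layer or a binding generator):

```python
import ctypes
import ctypes.util

# Load the C math library and describe one function's signature.
# hugin's C++ API would need wrapping before it could be used this way.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(2.0))  # 1.4142135623730951
```

For a library of hugin's size, a generator (SWIG, Boost.Python) would be far more practical than hand-written ctypes declarations.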
3D extension of panotools library
Update Feb 2010: this is largely already completed with the 'mosaic' mode in Hugin 2010.1
The current assumption of panotools is that all images are shot from the same point of view, in different directions. Develop and implement the mathematics to adjust for a shift in the point from which the pictures were taken.
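As an illustration of the mathematics involved (a minimal sketch, assuming a pinhole camera and a planar subject; all numeric values are hypothetical): for a flat scene, a shift of the viewpoint induces a planar homography between the two images, rather than the pure rotation panotools currently models.

```python
import numpy as np

# For a plane with normal n at distance d from camera 1, a camera
# shifted by c sees x2 ~ H x1 with H = K (I - c n^T / d) K^{-1}.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics
n = np.array([0.0, 0.0, 1.0])   # plane normal (facing the camera)
d = 5.0                          # plane distance from camera 1
c = np.array([0.3, -0.1, 0.0])  # camera 2 position (the viewpoint shift)

H = K @ (np.eye(3) - np.outer(c, n) / d) @ np.linalg.inv(K)

def project(K, X):
    x = K @ X
    return x[:2] / x[2]

# A point on the plane, seen by both cameras:
X = np.array([0.7, 0.4, d])
x1 = project(K, X)          # camera 1 at the origin
x2 = project(K, X - c)      # camera 2 shifted by c

# Applying H to the camera-1 pixel reproduces the camera-2 pixel:
pred = H @ np.append(x1, 1.0)
pred = pred[:2] / pred[2]
print(np.allclose(pred, x2))  # True
```

The project would need to fit such a model (per-image translations plus the plane) inside the existing panotools optimizer.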
enblend/enfuse gimp plugin
Various GUIs for enblend and enfuse already exist (e.g. ImageFuser), but nothing that would be as useful as a GIMP plugin. For example, GIMP opens multilayer TIFF files created by hugin and other tools; an option to 'blend visible layers with enblend' would allow manual adjustment of masks during blending.
Note that there are standalone ImageFuser and ExpoBlending tools that provide GUIs for enfuse, these should give some idea of the options that need to be made available in a GIMP plugin. ExpoBlending is also a Digikam plugin. There are currently no enblend GUIs.
bracketing pano model
Update Feb 2010: this is already completed; the layout mode was implemented as part of GSoC 2009
Currently the Hugin pano model has no provision for brackets / stacked images. This project would modify the pano model to make provision for image stacks to be considered as a single image or to be optimized locally as a stack before being optimized globally as a panorama.
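A minimal sketch of what such a model change could look like (hypothetical data structures, not hugin's actual classes): orientation lives on the stack, so bracketed exposures move as one unit under the global optimizer, while exposure remains per-image.

```python
from dataclasses import dataclass

# Hypothetical pano model with image stacks: yaw/pitch/roll are stored
# once per stack; each image keeps only its own exposure value.
@dataclass
class Stack:
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0

@dataclass
class Image:
    filename: str
    exposure_ev: float
    stack: Stack

s = Stack()
bracket = [Image("img_%d.jpg" % i, ev, s) for i, ev in enumerate([-2, 0, 2])]

# Global optimisation moves the whole stack at once:
s.yaw = 30.0
print(all(im.stack.yaw == 30.0 for im in bracket))  # True
```

Local (per-stack) optimisation could then refine small offsets between a stack's members before the global pass.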
hugin RAW support
(note this proposal needs to be checked for sanity by someone who understands RAW formats)
Something that ought to work very well would be if hugin/enblend could support RAW output, specifically 'linear DNG'. In this workflow, hugin would import RAW via dcraw as 16bit linear data with no correction. It would then output linear DNG files which could be opened in any RAW converter for further tweaking.
This would be a good Summer of Code project: modify hugin to use dcraw as an input plugin, integrate TCA correction, modify enblend to write linear DNG (or create a standalone tiff2dng tool), modify the hugin GUI to enable it, and fix any ufraw bugs reading linear DNG.
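A sketch of the proposed import step (the filename is hypothetical): dcraw's -4 option selects 16-bit linear output with no tone curve, and -T selects TIFF format, which is exactly the scene-referred data hugin would want. The command is only constructed here, not executed.

```python
# Hypothetical helper: build the dcraw invocation hugin would use to
# demosaic a RAW file to 16-bit linear TIFF with no correction applied.
def dcraw_linear_command(raw_path):
    return ["dcraw", "-4", "-T", raw_path]

cmd = dcraw_linear_command("IMG_0001.CR2")
print(" ".join(cmd))  # dcraw -4 -T IMG_0001.CR2
```

hugin would run this per input image (e.g. via a subprocess), then feed the resulting linear TIFFs into its normal pipeline.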
hugin colour balancing
note: a grey picker has been implemented in Hugin 2011.0.0
Internally hugin uses the EMoR photometric model. This means that adjusting the colour balance in hugin by altering the red/blue channel multipliers will give better results than doing the same in an image editor such as GIMP or Photoshop, provided the photometric parameters for the camera are calibrated properly. There is potentially unexplored interesting stuff that can be done with this capability: grey pickers, temperature sliders, curve adjustment etc.
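A small numeric sketch of why this matters, assuming an idealised pure-gamma response curve in place of the calibrated EMoR curve: applying a channel multiplier to linear light is not the same as applying it to gamma-encoded pixel values, which is what a plain image editor does.

```python
# Idealised response curve (real cameras need the calibrated EMoR model).
gamma = 2.2
def encode(v): return v ** (1 / gamma)   # linear -> encoded
def decode(v): return v ** gamma         # encoded -> linear

linear_red = 0.18   # scene-referred red value
mult = 1.5          # red channel multiplier (white balance)

correct = encode(linear_red * mult)   # scale in linear light (hugin's way)
naive = encode(linear_red) * mult     # scale the encoded value (image editor)

print(round(correct, 3), round(naive, 3))  # the two disagree
print(decode(naive) / linear_red)          # naive applies ~mult**gamma, not mult
```

The naive approach effectively raises the multiplier to the power of the response curve, shifting colours by far more than intended.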
A related problem is that hugin has an internal 'lens' representation that it uses to link lens parameters for different photos together. This capability really should be split into three models:
- lens: groups photos taken with the same lens but potentially different CCDs;
- sensor: groups photos taken with the same CCD but potentially different lenses;
- position: groups stacked images that can be rotated together (see the bracketing pano model above).
This is tinkering with hugin fundamentals and needs to be overseen by Pablo. Update: these backend changes were implemented as part of the GSoC 2009 layout project.
Straight-line detection for automated lens calibration
Update: Note this was added as the calibrate_lens tool as part of GSoC 2009, already available in Hugin 2009.2.0, however picking this up and integrating it would be part of any 'lens database' project.
One of the techniques for lens calibration is to optimise straight lines. Tom Sharpless suggests: "Hugin needs an easy and reliable way to do straight line lens calibration. After many years using various calibration systems, photogrammetrists seem to have decided the straight line method is best. It is robust, accurate, and can often use naturally occurring straight lines rather than special calibration rigs. And it works well with fisheye lenses, which many other methods do not.
The key is software that can automatically follow strong "line-like" curves and estimate their positions to subpixel accuracy. Then, using a reasonable model of the lens, an optimizer like the one in Hugin can fit parameters that straighten the lines. Since the raw images of calibration lines are generally curved, a human may have to designate which lines are straight in reality. However, fully automatic straight line calibration is theoretically possible, based on jointly optimizing lens parameters and estimated line shapes.
A tool of this kind would be especially valuable for calibrating fisheye lenses, something the PanoTools family of software has always done poorly. Part of the problem is that the original PT library used the equal-angle model for fisheye lenses, instead of the equal-area model most modern fisheyes are actually designed for. Apparently libpano13 now has the equal-area model, too, but Hugin continues to use the equal-angle one. So an important part of this project could be to revise Hugin to handle fisheye lenses more correctly."
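A minimal sketch of the straightness cost such an optimiser could minimise (a simplified setup; real edge detection, subpixel tracking, and lens models are out of scope here): fit a line to the detected edge points by total least squares and sum the squared perpendicular distances. Lens parameters that straighten the curve drive this cost toward zero.

```python
import numpy as np

def straightness_cost(points):
    """Squared residual of points to their best-fit line (total least squares)."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    # Smallest singular value measures the spread normal to the best line.
    return np.linalg.svd(centred, compute_uv=False)[-1] ** 2

x = np.linspace(-1, 1, 50)
straight = np.column_stack([x, 0.5 * x + 0.1])
curved = np.column_stack([x, 0.5 * x + 0.1 + 0.05 * x ** 2])  # barrel-like bow

print(straightness_cost(straight) < 1e-20)                      # True
print(straightness_cost(curved) > straightness_cost(straight))  # True
```

An outer optimisation loop would adjust the lens distortion parameters, re-map the edge points, and re-evaluate this cost until the lines come out straight.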
Simple mask editing
Update: Note this was completed and implemented separately from GSoC; it has been available since Hugin 2010.2.0 and so can't be a SoC project
(note there was an automated masking project in 2008)
First, note that there are two almost unrelated mask-editing scenarios:
1. masking out objects that you don't want to appear in the scene;
2. masking to put one object in front of another.
Number 2 is never going to be done in hugin; this is a job for an image editor, with feathered eraser brushes and clone tools. For this we really need some kind of multilayer output, and an enblend/enfuse GIMP plugin to merge these layers two at a time should be enough to make this work well (see above).
(Note there is a 'layered' target in the Makefile.equirect.mk 'plugin' which does this multilayered TIFF output)
The simplest way to provide number 1 is to change the crop tool into a polygon editor for masking the original images, and store the polygon coordinates in the 'i' lines of the .pto file. Masking would be performed at the remapping stage, as it currently is for circular crops.
This is conceptually very simple. At a later date the coordinates can be translated back and forth between the photo and panorama spaces, then automation of the mask can build on this.
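A sketch of how the stored polygon could be applied at remap time, using a standard ray-casting point-in-polygon test (the polygon coordinates are hypothetical): source pixels falling inside the polygon are simply skipped during remapping.

```python
def in_polygon(x, y, poly):
    """Ray-casting test: is (x, y) inside the polygon given as (x, y) pairs?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count edge crossings of a horizontal ray extending to the right.
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

mask = [(10, 10), (100, 10), (100, 80), (10, 80)]  # polygon from an 'i' line
print(in_polygon(50, 40, mask))   # True  -> pixel is masked out
print(in_polygon(5, 5, mask))     # False -> pixel passes through
```

Storing only the polygon vertices keeps the .pto file compact, and the same coordinates can later be translated between photo and panorama space.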
Implementing a new projection model for the creation of mosaics in panotools
Update: this was a GSoC that failed in 2009, it has however been implemented outside of SoC and will be available in Hugin 2010.2.0.
Currently panotools implements a 3d spherical model for panorama projection. This means that it assumes that the camera rotates around its no-parallax-point from one frame to another. This model is perfectly good for making spherical panoramas.
There is, however, another use of registration and stitching of photographs: mosaics. In this scenario the object to be captured is a flat plane (such as a wall, an aerial photograph, or partial scans of a larger work).
This project will require reverse-engineering and documenting the current projection model, then proposing and implementing a new one. The panotools remapping infrastructure is very flexible, so this should be relatively straightforward to accomplish.
zenith/nadir blending for enblend-enfuse
Currently enblend-enfuse is not aware of the zenith/nadir seam in full spherical images. This sometimes results in star-like artifacts. Strategies have been devised to work around that limitation, such as re-projecting the affected areas, blending them, and merging them back. It would be nice if the code could do this automatically.
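A sketch of the re-projection step, assuming an idealised equirectangular input (a simplified, hypothetical mapping): pixels near the zenith are mapped to a rectilinear view looking straight up, where the seam can be blended without the pole distortion of the equirectangular grid.

```python
import math

def equirect_to_zenith_plane(lon, lat):
    """lon/lat in radians -> (u, v) on the plane one unit above the camera."""
    # Equirectangular coordinates to a unit direction vector:
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    if z <= 0:
        raise ValueError("point is not in the upper hemisphere")
    # Central projection onto the plane z = 1 (camera looking straight up):
    return x / z, y / z

u, v = equirect_to_zenith_plane(0.0, math.pi / 2)   # the zenith itself
print(round(u, 9), round(v, 9))                     # 0.0 0.0 (plane centre)
u, v = equirect_to_zenith_plane(math.pi / 4, math.pi / 4)
print(round(math.hypot(u, v), 3))                   # 1.0 (45 degrees off-zenith)
```

After blending on this plane, the inverse mapping would write the result back into the equirectangular output.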
seam optimization in enblend-enfuse
The seam optimization algorithm of enblend-enfuse can be improved. Andrew would be interested to mentor such a project.