SoC 2008 ideas

From PanoTools.org Wiki
Latest revision as of 23:37, 30 March 2011

Introduction

If you are a student willing to participate in The Google Summer of Code 2008, do as suggested below:

  • find out what ideas we have for SoC projects this year (read below);
  • make up your mind whether you want to pick one of those tasks or have your own idea;
  • join our community at hugin-ptx;
  • introduce yourself and tell us about your plans and wishes;
  • add your proposal to the student proposal page.

Development style

Most of the projects below are related to Hugin, and some also relate to Panotools or tlalli. Hugin is mostly written in C++, and uses the VIGRA image processing library to support different types of images (for example, 8-bit, 16-bit and float (HDR) images). The core functionality is implemented in a platform-independent C++ library, which is used by the GUI based on the wxWidgets toolkit, and by the command-line programs (nona, fulla). We also very much welcome contributions to Enblend/Enfuse.

The development of the projects should take place in a separate branch of the project's CVS (or SVN) repository. Communication with the mentors should usually happen through the appropriate development mailing list. All code should work on the major platforms supported (Linux, OSX, Windows).

Possible Projects

Some of the SoC2007 project proposals were not done last year and are potential projects for this year:

  • Automatic feature matching for panoramic images (completed)
  • Interactive panoramic viewer (completed in GSoC 2009, though further work is possible)
  • Processing of very large images (using the VIPS framework, or even GEGL if ready)
  • Architectural Overhaul of Panotools

munin — interactive openGL based GUI

note 2011: the opengl preview has since been implemented in Hugin. Hugin now has an experimental python interface, so this project is relevant again

(Note that munin is a popular open source graphing tool for network administrators, so this name isn't really usable)

Possible Mentors:

  • Pablo

Description:

Intuitive and interactive GUI, with priority on usability over available features and flexibility, based on what users should see — not on what software does internally. There are two roads to this:

  1. Create bindings from the core hugin library to a high-level scripting language such as Python, Ruby or maybe Lua, and use these to write the GUI.
  2. Use the core library directly and write the GUI in C++. While this might be easier at the start, a GUI written in a scripting language has major benefits in the long term (rapid development, extensibility etc.).
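The first road can be illustrated in miniature with Python's ctypes, here calling the C math library as a stand-in for a compiled hugin core library (real bindings would more likely be generated with SWIG or Boost.Python against the actual hugin API):

```python
import ctypes
import ctypes.util

# Load the C math library -- a stand-in for a compiled core library.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so the call is type-safe from script level.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

# The C function is now callable as an ordinary Python function.
print(libm.cos(0.0))  # 1.0
```

A GUI written on top of such bindings keeps the heavy lifting in C++ while the interface logic stays in the scripting layer.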

It would be nice to have a preview/renderer in OpenGL, but I fear that a new GUI and OpenGL acceleration is too big for a single GSOC project. If you are an experienced developer (know and have used all the required technology on non-trivial projects) it might be possible though.

Skills:

  • C++
  • Scripting language (python, ruby, lua?)
  • Qt4
  • OpenGL

Discussion:

How about relaxing the skills, and instead of fixing on Qt4 work on a combination of high level scripting language and widgets? Python + wxWidgets or Lua + wxWidgets come to my mind. Right now we have the option as both wxWidgets and Qt are in trunk, and if the student codes those widgets that require speed in OpenGL so that they integrate into such an API, any scripter can then leverage them for the fast and efficient design of GUI. Yuval 12:50, 4 March 2008 (CET)

I think the GUI + OpenGL is a bit too much for a single project. A separate project for OpenGL acceleration would make more sense, I fear. I'd really like to see a scripting language interface to the hugin core library. --pablo 08:47, 19 March 2008 (CET)

enblend-enfuse zenith/nadir and more

Primary Mentor

Andrew Mihal

Description

  • implementing blending of the zenith and nadir in Enblend and Enfuse
    The best way I can think of to do this is to use a cubic projection and provide all cube faces to the tool simultaneously. All of the boundaries will be horizontal or vertical lines, which will fit in well with the SKIPSM-based pyramid algorithms currently in use. Enblend and Enfuse can share the same pyramid code.
  • Enblend would additionally need a nearest feature transform algorithm that works on the cube surface for mask generation. I am not aware of a published algorithm to solve this.
  • Improve the seam optimization algorithm in Enblend.
  • GUI hooks: an API to a higher abstraction level language such as Python+wxWidgets or Lua+wxWidgets, with a real-time interface into the enfuse functionality so that when changing weights with sliders a preview can be generated in near real time.
  • multi-step fusing and alternative, user controlled, weighting (sigma).

Skills

Admission Test

A potential applicant for this project should first try to add boundary conditions to the localVarianceIf function in enfuse.h.

OpenGL accelerated hugin preview

Update: This was completed as part of GSoC 2008 'fast preview' and has been available since Hugin 0.8.0

Primary mentor

Discussion

There is a nice proof of concept for this idea created as a student project at Hallam University.

Skills

Ghost removal for enfuse

Update: This was completed as part of GSoC 2009 and will be available in Hugin 2010.0

Primary mentor

Discussion

Thanks to SoC2007 project Anti Ghosting, hugin has ghost removal when merging bracketed series to HDR, but this isn't currently available for enfuse.

Skills

Masking in GUI

Update: This has been completed independently of GSoC and will be available with Hugin 2010.2.0

Primary mentor

Discussion

Currently, masking to remove unwanted objects from panoramas has to be added to alpha channels, which is laborious; this could be integrated into the hugin GUI via a simple vector/polygon editor. It would need to be simpler to use than the equivalent process with Inkscape.

Skills

Batch processing

Update: This was completed in GSoC 2008 and has been available since Hugin 0.8.0. However there is still no GUI system for batch processing the creation of Hugin projects

Primary mentor

Discussion

SoC2007 project Batch Processing was an unadopted project last year.

Now that hugin has a Makefile based stitching process, it is much clearer what a batch stitcher would look like. This would need to support 'plug-in' Makefile add-ons for additional views, QTVR generation, uploading, sharpening etc... as well as a queue/spool manager.

Some explanation of this existing 'Makefile process': currently hugin stitches projects by writing out the instructions (remapping and blending with nona, enblend, enfuse, exiftool etc...) into a standard Makefile and then running 'make -f project.pto.mk all clean'. This happens all in one go when the user clicks 'save and stitch', but alternatively the user can run 'make' manually.

So batch processing already exists for users familiar with the command line, i.e. we can run 'make -j 16' to run processes in parallel or use a tool such as distmake to queue jobs over the network on multiple machines.

A plugin system also already exists to a certain extent, in that these Makefiles can be extended with other makefiles that add additional targets; one for creating QTVR output is described here.
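Such an add-on is just a second makefile layered over the generated one. A hypothetical sketch (the file name, the PANO variable and the targets are all assumptions for illustration, not what hugin's generated Makefile actually defines):

```make
# addons.mk - hypothetical add-on targets layered over the generated Makefile.
# PANO is an assumed name for the stitched output file; the real generated
# Makefile may use different variable names.
PANO = project.tif

# Downscaled copy for web publishing, built from the stitched panorama.
webcopy : $(PANO)
	convert $(PANO) -resize 1600x800 project_web.jpg

# Upload the web copy; depends on webcopy so 'make upload' does both steps.
upload : webcopy
	scp project_web.jpg user@example.org:public_html/panos/
```

make merges multiple makefiles given on the command line, so this could be invoked as 'make -f project.pto.mk -f addons.mk upload'.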

A batch stitcher would need to enable some or all of this functionality for normal users who have an interest in photography rather than tinkering with script files.

Skills

Lens Database

note 2011: the lens database now exists as lensfun, which is mature and in use by ufraw and rawstudio. Hugin still hasn't implemented lensfun support, and could potentially be used to collect and submit 'good' lens parameters. Note also that calibrate_lens was created in GSoC 2009 and can also be used in this project

Primary mentor

Alexandre Jenny

Discussion

Lens database of accumulated knowledge on distortion, CA, response curve. The aim: use stitching to feed a huge camera/lens database for distortion correction, vignetting correction, CA, etc. Then it will be possible to reuse this database to correct single pictures too.

There are two ways to generate such a database:

  1. Collect high quality photographs centrally, manually calibrate them and assemble a database that matches parameters with image EXIF data - This is how the PTLens database was collected.
  2. Use the fact that stitching software (eg hugin, but there are other tools using the same lens correction model) effectively calibrates lenses every time a project is stitched - This information could be collected, validated and averaged centrally to automate the creation of a lens database.

This project would investigate the second technique. It can be divided into several independent subparts. First part, theory:

  • Distortion parameter accumulation: which measure is important, how can such a measure be reproduced, and what criterion should be used in a stitch to be sure that the optimized parameters are good for the database.
  • CA: find a reproducible way to measure it. Study the variation of the CA with zoom parameters (does it vary with aperture? with focal length?).
  • Response curve: this parameter is less dependent on the lens; it's more sensor/maker related.

Second part, statistics accumulation:

  • design a standard format to store every parameter,
  • create and manage a central repository for the lens database.
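The "collect, validate, average" step of the second part can be sketched in plain Python. The record fields and values below are invented for illustration; the point is that a robust estimator such as the median discards occasional bad optimizer results:

```python
from statistics import median

# Hypothetical submitted records: optimized lens parameters keyed by EXIF
# data. Field names and values are invented for illustration.
submissions = [
    {"lens": "ACME 18-55mm", "focal": 18, "a": 0.011, "b": -0.031, "c": 0.009},
    {"lens": "ACME 18-55mm", "focal": 18, "a": 0.013, "b": -0.029, "c": 0.010},
    {"lens": "ACME 18-55mm", "focal": 18, "a": 0.250, "b": -0.500, "c": 0.300},  # bad stitch
]

def aggregate(records, params=("a", "b", "c")):
    """Group submissions per (lens, focal) key and take the per-parameter
    median, so a single bad stitch cannot skew the database entry."""
    by_key = {}
    for r in records:
        by_key.setdefault((r["lens"], r["focal"]), []).append(r)
    return {
        key: {p: median(r[p] for r in group) for p in params}
        for key, group in by_key.items()
    }

db = aggregate(submissions)
print(db[("ACME 18-55mm", 18)]["a"])  # 0.013 -- the outlier 0.250 is ignored
```

A real repository would also need per-record validation (sanity ranges, minimum control-point error in the originating stitch) before a submission enters the pool.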

Skills

  • General: computer vision field
  • Coding: C / C++ / STL
  • Math: optimization techniques

tCA Correction

Update: tca_correct has been implemented independently of GSoC and has been available since Hugin 0.7.0; however, there is no GUI. Note that tca_correct is slow; speeding it up and integrating it with the GUI and with stitching in nona would be very useful

Primary Mentor

Description

A utility for determining the Transverse Chromatic Aberration (tCA) of an image so it can be corrected.

CA is when the color channels in an image do not appear to line up well with each other. tCA is when colors are in focus but placed adjacent to each other. tCA can be corrected with Panorama Tools' Radial Shift. More info on Chromatic aberration.

Correcting tCA using PanoTools' Radial Shift requires knowing the radial correction coefficients a, b and c for your lens. PTOptimizer can be used to calculate the coefficients if the amount of shift for each channel is known from the center to the corners of the image.
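The radial correction referred to here applies a polynomial to the normalized radius of each channel; in the panotools convention the fourth coefficient d is derived as 1 - a - b - c so that the image edge (r = 1) keeps its scale. A minimal sketch of evaluating that model:

```python
def radial_shift(r, a, b, c):
    """Panotools radial correction: r_src = (a*r^3 + b*r^2 + c*r + d) * r,
    with d = 1 - a - b - c so that r = 1 maps to itself."""
    d = 1.0 - a - b - c
    return (a * r**3 + b * r**2 + c * r + d) * r

# With all coefficients zero the mapping is the identity:
print(radial_shift(0.5, 0.0, 0.0, 0.0))  # 0.5
```

For tCA correction the red and blue channels each get their own (a, b, c) relative to green, which is left untouched.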

This utility will produce these coefficients to be used in the lens database.

Assume the image has a 180-degree diagonal field of view, regardless of whether it is rectilinear or fisheye and whatever its actual field of view. Map the image to the nadir of a sphere, taking into account any offset from the center of the image.

The bottom of the remapped image is the center of the original. Moving up the remapped image is the same as moving away from the center of the original image.

Now we no longer have to deal with radial shift but with vertical shift. The image can be divided into a number of strips going across the entire width of the image. Each strip represents a circle at a different distance from the center. The strips are shifted up and down by the same amount over the entire image width. Each strip can have the channel data adjusted up and down to find the best match that eliminates tCA at that position. One channel does not move while the other two are adjusted. The best result for each of the two channels is recorded for every strip. Sub-pixel accuracy can be achieved by increasing the height of the result when creating the remapped image.

This can be done manually in real time, giving visual feedback of the channels' alignment. Or it can be done programmatically by taking the difference between the two channels and looking at the histogram: the blackest result has the best alignment. If accurate automated results are possible then there is no need for the manual method.
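The per-strip search described above is essentially a 1-D alignment: slide one channel against the reference and keep the offset that minimizes the residual difference (the "blackest" difference image). A minimal sketch on a synthetic strip:

```python
def best_shift(ref, ch, max_shift=3):
    """Return the vertical offset of channel `ch` relative to `ref` that
    minimizes the mean absolute difference over the overlapping samples."""
    best = (float("inf"), 0)
    for s in range(-max_shift, max_shift + 1):
        # compare ref[i] against ch[i + s] over the valid overlap
        lo, hi = max(0, -s), min(len(ref), len(ch) - s)
        err = sum(abs(ref[i] - ch[i + s]) for i in range(lo, hi)) / (hi - lo)
        best = min(best, (err, s))
    return best[1]

# Synthetic strip: the reference channel shows an edge; the second channel
# shows the same edge displaced by two samples, as tCA would.
red  = [0, 0, 0, 10, 50, 90, 100, 100, 100, 100]
blue = [0, 0, 0, 0, 0, 10, 50, 90, 100, 100]
print(best_shift(red, blue))  # 2
```

Doing this per strip, then fitting the radial polynomial through the per-strip offsets, yields the a, b, c coefficients discussed above.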

Current methods of correcting tCA are fairly complicated, though the process can be completely automated using a patched version of align_image_stack from the hugin project. This technique works very well but could use some optimisation work to make it faster (and the align_image_stack patch needs integrating).

Also, according to this post by the author of ufraw, the tCA correction in the latest dcraw is done before Bayer interpolation (note that my interpretation of this code was the opposite, so what do I know - Bruno). This tool should also create parameters for use by dcraw.

Extend hugin's output options for stitching

update: tiffcp can be used for assembling multi-layer TIFF files; however, there would be some value in adding native PSD/PSB support to vigra, which is the library used by both Hugin and enblend for image file IO

Primary Mentor

Zoran Mesec

Description

Currently hugin offers jpg, png, tiff and layered tiff output. Suggested are other formats like layered PSD, as well as swf, Java and HTML generation for web publishing. Despite all software projects' efforts, stitched panoramas have some errors (like ghost objects) that can be easily removed in Photoshop or Gimp with layered output. A lot of open source graphical software already offers the proposed file formats, but it is not an easy task that can be implemented in a week, and therefore it is suitable as a standalone SoC project.

Note that this is potentially complementary and orthogonal to the batch stitching project described above; Makefiles are a good way to provide additional output types and are quite suitable for HTML generation or even uploading.

hugin and other tools generate Cropped TIFF files, with each 'layer' as a separate file. The workflow described above for using this data in image editors really requires file formats that contain all layers in a single file, such as multilayer TIFF or PSD. Command-line tools for assembling these files are currently either too buggy to use or non-existent, and fixing this situation would be a good project in itself.

Skills

Utility for creating a Philosphere

update: note that during the proposal stage this project was discarded as being too small; also, mathmap is a better tool for implementing this kind of thing

Primary Mentor

Zoran Mesec

Description

Goal: A utility integrated into hugin for creating images from panoramas, suitable for printing.

Tasks:

  • Implementation of the described process using modified PT scripts and then combining partial results into the final image
  • Integration into hugin

Discussion: There are several possible outputs that can be used to print a panorama. Examples: polyhedron, "orange slices", rhombicuboctahedron...

Resources:

  • http://www.philohome.com/rhombicuboctahedron/rhombicuboctahedron.htm
  • PTStitcher
  • experiments with mathmap by Seb Przd - particularly note the 'conformal cube'

Skills

Required knowledge or interest in:

  • panoramic imaging
  • C++ and scripting languages development skills.

Sky identification

Update: this was completed as part of GSoC 2008 and has been available since Hugin 0.8.0

Primary Mentor

Description

Feature matching tools have a tendency to identify potentially useful features in clouds; this is not surprising given the amount of useful contrast and detail in a typical cloud.

This tendency is problematic when matching images for panorama stitching, as clouds can move significant distances even between photos taken seconds apart.

This project would find a suitable method of identifying areas of sky based on a search of existing literature, and implement it as a standalone tool capable of pruning Control points from existing panotools/hugin format script files.
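Pruning control points from a panotools/hugin script can be sketched independently of the sky detector itself. In the script format, control points are `c` lines carrying `x`/`y` source coordinates. The classifier below is a deliberately naive placeholder (top quarter of the frame counts as sky); the literature-based detector is the actual research part of the project:

```python
import re

def is_sky(x, y, img_height):
    """Placeholder classifier: treat the top quarter of the frame as sky.
    A real implementation would use a literature-based sky detector."""
    return y < img_height / 4

def prune_sky_points(pto_text, img_height):
    """Drop control-point ('c') lines whose source coordinate lies in sky."""
    kept = []
    for line in pto_text.splitlines():
        if line.startswith("c "):
            x = float(re.search(r" x([\d.]+)", line).group(1))
            y = float(re.search(r" y([\d.]+)", line).group(1))
            if is_sky(x, y, img_height):
                continue  # prune this control point
        kept.append(line)
    return "\n".join(kept)

script = """\
c n0 N1 x100.0 y50.0 X110.0 Y55.0 t0
c n0 N1 x200.0 y900.0 X210.0 Y905.0 t0"""
print(prune_sky_points(script, img_height=1000))
# only the second (ground-level) point survives
```

Non-`c` lines (image and optimizer lines) pass through unchanged, so the output remains a valid script.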