SoC 2008 ideas
Revision as of 23:33, 30 March 2011
- 1 Introduction
- 2 Development style
- 3 Possible Projects
- 3.1 munin — interactive openGL based GUI
- 3.2 enblend-enfuse zenith/nadir and more
- 3.3 OpenGL accelerated hugin preview
- 3.4 Ghost removal for enfuse
- 3.5 Masking in GUI
- 3.6 Batch processing
- 3.7 Lens Database
- 3.8 tCA Correction
- 3.9 Extend hugin's output options for stitching
- 3.10 Utility for creating a Philosphere
- 3.11 Sky identification
== Introduction ==
If you are a student willing to participate in the Google Summer of Code 2008, do as suggested below:
- find out what ideas we have for SoC projects this year (read below);
- make up your mind whether you want to pick one of those tasks or propose your own idea;
- join our community at hugin-ptx;
- introduce yourself and tell us about your plans and wishes;
- add your proposal to the student proposal page.
Most of the projects below are related to Hugin, and some also relate to Panotools or tlalli. Hugin is mostly written in C++, and uses the VIGRA image processing library to support different types of images (for example, 8bit, 16bit and float (HDR) images). The core functionality is implemented in a platform independent C++ library, which is used by the GUI, based on the wxWidgets toolkit, and by the command line programs (nona, fulla). We also very much welcome contributions to Enblend/Enfuse.
== Development style ==
The development of the projects should take place in a separate branch of the project's CVS (or SVN) repository. Communication with the mentors should usually happen through the appropriate development mailing list. All code should work on the major supported platforms (Linux, OS X, Windows).
== Possible Projects ==
Some of the SoC2007 project proposals were not completed last year and are potential projects for this year:
- SoC2007_projects#Automatic feature matching for panoramic images (Completed)
- SoC2007_projects#Interactive panoramic viewer (this was completed, but there is further possible work to be done; Completed in GSoC 2009)
- SoC2007_projects#Processing of very large images (using the VIPS framework, or even GEGL if ready)
- SoC2007_projects#Architectural Overhaul of Panotools
=== munin — interactive openGL based GUI ===
Note 2011: the OpenGL preview has since been implemented in Hugin. Hugin now has an experimental Python interface, so this project is relevant again.
(Note that munin is a popular open source graphing tool for network administrators, so this name isn't really usable)
Intuitive and interactive GUI, with priority on usability over available features and flexibility, based on what users should see — not on what the software does internally. There are two roads to this:
1. Create bindings from the core hugin library to a high level scripting language such as Python, Ruby or maybe Lua, and use these to write the GUI.
2. Use the core library directly and write the GUI in C++.
While the second might be easier at the start, a GUI written in a scripting language has major benefits in the long term (rapid development, extensibility etc.).
It would be nice to have a preview/renderer in OpenGL, but I fear that a new GUI and OpenGL acceleration is too big for a single GSOC project. If you are an experienced developer (know and have used all the required technology on non-trivial projects) it might be possible though.
- Scripting language (python, ruby, lua?)
how about relaxing the skills, and instead of fixing on Qt4 work on a combination of a high level scripting language and widgets? Python + wxWidgets or Lua + wxWidgets come to my mind. Right now we have the option, as both wxWidgets and Qt are in trunk, and if the student codes those widgets that require speed in OpenGL so that they integrate into such an API, any scripter can then leverage them for fast and efficient GUI design. Yuval 12:50, 4 March 2008 (CET)
I think the GUI + OpenGL is a bit too much for a single project. A separate project for OpenGL acceleration would make more sense, I fear. I'd really like to see a scripting language interface to the hugin core library. --pablo 08:47, 19 March 2008 (CET)
=== enblend-enfuse zenith/nadir and more ===
- Implementing blending of the zenith and nadir in Enblend and Enfuse.
- The best way I can think of to do this is to use a cubic projection and provide all cube faces to the tool simultaneously. All of the boundaries will be horizontal or vertical lines, which will fit in well with the SKIPSM-based pyramid algorithms currently in use. Enblend and Enfuse can share the same pyramid code.
- Enblend would additionally need a nearest feature transform algorithm that works on the cube surface for mask generation. I am not aware of a published algorithm to solve this.
- Improve the seam optimization algorithm in Enblend.
- GUI hooks: an API to a higher abstraction level language such as Python+wxWidgets or Lua+wxWidgets, with a real time interface into the enfuse functionality, so that when changing weights with sliders a preview can be generated in near real time.
- Multi-step fusing and alternative, user-controlled weighting (sigma).
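The nearest feature transform Enblend needs for mask generation can be sketched on a flat grid as a multi-source breadth-first search; extending it across cube-face boundaries (the unpublished part) would amount to adding the face-edge adjacencies to the neighbour step. A minimal sketch, with an illustrative function name and grid representation (this is not Enblend's actual API):

```python
from collections import deque

def nearest_feature_transform(labels):
    """For every cell of a 2-D grid, find the label of the nearest seed.

    labels: 2-D list where non-None entries are seed labels (e.g. the
    image number owning that pixel) and None entries are unassigned.
    Returns a grid of the same shape with every cell filled by the label
    of its nearest seed (4-connected BFS, i.e. city-block distances)."""
    h, w = len(labels), len(labels[0])
    out = [row[:] for row in labels]
    # Seed the queue with every labelled cell (multi-source BFS).
    q = deque((r, c) for r in range(h) for c in range(w)
              if labels[r][c] is not None)
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and out[nr][nc] is None:
                out[nr][nc] = out[r][c]  # propagate the nearest seed's label
                q.append((nr, nc))
    return out
```

On the cube surface, the neighbour tuple above would be replaced by a lookup that follows edges between adjacent cube faces.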
A potential applicant for this project should first try to add boundary conditions to the localVarianceIf function in enfuse.h.
=== OpenGL accelerated hugin preview ===
Update: This was completed as part of GSoC 2008 'fast preview' and has been available since Hugin 0.8.0
There is a nice proof of concept for this idea created as a student project at Hallam University.
=== Ghost removal for enfuse ===
Update: This was completed as part of GSoC 2009 and will be available in Hugin 2010.0
=== Masking in GUI ===
Update: This has been completed independently of GSoC and will be available with Hugin 2010.2.0
Currently, masks to remove unwanted objects from panoramas have to be added to alpha channels, which is laborious; this could be integrated into the hugin GUI via a simple vector/polygon editor. It would need to be simpler to use than the equivalent process with Inkscape.
=== Batch processing ===
Update: This was completed in GSoC 2008 and has been available since Hugin 0.8.0. However, there is still no GUI system for batch processing the creation of Hugin projects.
The SoC2007 Batch Processing project was unadopted last year.
Now that hugin has a Makefile based stitching process, it is much clearer what a batch stitcher would look like. This would need to support 'plug-in' Makefile add-ons for additional views, QTVR generation, uploading, sharpening etc... as well as a queue/spool manager.
Some explanation of this existing 'Makefile process': currently hugin stitches projects by writing the instructions (remapping and blending with nona, enblend, enfuse, exiftool etc.) into a standard Makefile and then running 'make -f project.pto.mk all clean'. This happens all in one go when a user clicks 'save and stitch', but alternatively the user can run 'make' manually.
So batch processing already exists for users familiar with the command line, i.e. we can run 'make -j 16' to run processes in parallel, or use a tool such as distmake to queue jobs over the network on multiple machines.
A plugin system also already exists to a certain extent in that these Makefiles can be extended with other makefiles that add additional targets, one for creating QTVR output is described here.
A batch stitcher would need to enable some or all of this functionality for normal users who have an interest in photography rather than tinkering with script files.
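As a sketch of what the queue/spool part could look like (the function name is illustrative, not an existing hugin tool): a small worker pool that runs each project's stitching command and reports the exit codes.

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_queue(commands, workers=2):
    """Run each stitching command (an argv list) in a worker pool and
    return the exit codes in submission order.

    For hugin, each command would typically be something like
    ["make", "-f", "project.pto.mk", "all", "clean"]."""
    def run(cmd):
        return subprocess.run(cmd).returncode
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run, commands))
```

A real spool manager would add persistence, per-job logs and the 'plug-in' Makefile targets mentioned above; distribution over multiple machines could still be delegated to a tool like distmake.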
=== Lens Database ===
Note 2011: the lens database now exists as lensfun, which is mature and in use by ufraw and rawstudio. Hugin still hasn't implemented lensfun support, but could potentially be used to collect and submit 'good' lens parameters. Note also that calibrate_lens was created in GSoC 2009 and can also be used in this project.
A lens database of accumulated knowledge on distortion, CA and response curve. The aim: use stitching to feed a huge camera/lens database for distortion correction, vignetting correction, CA, etc. It would then be possible to reuse this database to correct single pictures too.
There are two ways to generate such a database:
- Collect high quality photographs centrally, manually calibrate them and assemble a database that matches parameters with image EXIF data - This is how the PTLens database was collected.
- Use the fact that stitching software (eg hugin, but there are other tools using the same lens correction model) effectively calibrates lenses every time a project is stitched - This information could be collected, validated and averaged centrally to automate the creation of a lens database.
This project would investigate the second technique. It can be divided into several independent subparts. First part, theory:
- Distortion parameter accumulation: which measure is important, how can such a measure be reproduced, and what criterion should be used in a stitch to be sure that the optimized parameters are good enough for the database.
- CA: find a reproducible way to measure it. Study the variation of CA with zoom parameters (does it vary with aperture? with focal length?).
- Response curve: this parameter is less dependent on the lens; it is more sensor/maker related.
Second part, statistics accumulation:
- design a standard format to store every parameter,
- create and manage a central repository for the lens database.
Required knowledge. General: the computer vision field. Coding: C/C++/STL. Math: optimization techniques.
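The 'collect, validate and average centrally' step of the second approach could start as simply as grouping submissions by camera/lens/focal length and taking per-coefficient medians, a robust average that resists the occasional badly optimized project. A sketch under those assumptions (the record format and function name are hypothetical, not part of any existing database format):

```python
from statistics import median

def aggregate(records, min_samples=3):
    """Build a lens database from submitted distortion coefficients.

    Each record is ((camera, lens, focal_mm), (a, b, c)), where a, b, c
    are the radial distortion coefficients optimized during a stitch.
    Submissions are grouped by the (camera, lens, focal_mm) key and the
    per-coefficient median is taken, so one bad stitch cannot skew the
    entry.  Groups with fewer than min_samples submissions are withheld
    until more data arrives."""
    groups = {}
    for key, coeffs in records:
        groups.setdefault(key, []).append(coeffs)
    return {key: tuple(median(c[i] for c in cs) for i in range(3))
            for key, cs in groups.items() if len(cs) >= min_samples}
```

The validation criterion from the theory part (when is a stitch trustworthy?) would act as a filter on `records` before aggregation.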
=== tCA Correction ===
Update: tca_correct has been implemented independently of GSoC and has been available since Hugin 0.7.0; however, there is no GUI.
A utility for determining the Transverse Chromatic Aberration (tCA) of an image so it can be corrected.
CA is when the color channels in an image do not line up well with each other. tCA is when the colors are in focus but displaced laterally relative to each other. tCA can be corrected with Panorama Tools' Radial Shift. More info on Chromatic aberration.
Correcting tCA using PanoTools' Radial Shift requires knowing the radial correction coefficients a, b and c for your lens. PTOptimizer can be used to calculate the coefficients if the amount of shift for each channel is known from the center to the corners of the image.
This utility will produce these coefficients to be used in the lens database.
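For concreteness, the correction these coefficients feed is a polynomial in the normalised radius. A small sketch; the d = 1 - (a + b + c) default reflects the convention of keeping the image edge fixed, but check the PanoTools documentation for the exact normalisation before relying on this form:

```python
def radial_shift(r, a, b, c, d=None):
    """PanoTools-style radial polynomial: the source radius a pixel is
    sampled from, as a function of its destination radius r (both
    normalised to the image half-size).

    When d is omitted it defaults to 1 - (a + b + c), so that r = 1.0
    maps to itself and the image edge stays fixed.  For tCA correction,
    separate (a, b, c) sets are applied to the red and blue channels
    relative to green."""
    if d is None:
        d = 1.0 - (a + b + c)
    return r * (a * r**3 + b * r**2 + c * r + d)
```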
Assume the image has a 180 degree diagonal field of view, regardless of whether it is rectilinear or fisheye and regardless of its actual field of view. Map the image to the nadir of a sphere, taking into account any offset from the center of the image.
The bottom of the remapped image is the center of the original. Moving up the remapped image is the same as moving away from the center of the original image.
Now we no longer have to deal with radial shift but with vertical shift. The image can be divided into a number of strips going across its entire width. Each strip represents a circle at a different distance from the center, and each strip is shifted up and down by the same amount over the entire image width. For each strip, the channel data is adjusted up and down to find the best match that eliminates tCA at that position: one channel does not move while the other two are adjusted, and the best result for each of the two moving channels is recorded for every strip. Sub-pixel accuracy can be achieved by increasing the height of the result when creating the remapped image.
This can be done manually in real time, giving visual feedback of the channels' alignment. Or it can be done programmatically by taking the difference between the two channels and looking at the histogram: the blackest result has the best alignment. If accurate automated results are possible, then there is no need for the manual method.
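The per-strip search described above amounts to trying integer shifts of one channel against the fixed channel and keeping the shift with the smallest mean absolute difference, i.e. the 'blackest' difference image. A 1-D sketch with illustrative names (sub-pixel accuracy would come from upsampling the strips first, as the text suggests):

```python
def best_shift(ref, moving, max_shift=3):
    """Integer vertical shift of `moving` (one colour channel's strip,
    as a flat list of intensities) that best aligns it with `ref` (the
    fixed channel), by minimising the mean absolute difference over the
    overlapping samples."""
    def mad(shift):
        pairs = [(ref[i], moving[i - shift])
                 for i in range(len(ref))
                 if 0 <= i - shift < len(moving)]
        # Normalise by the overlap so larger shifts (fewer pairs)
        # are not unfairly favoured.
        return sum(abs(a - b) for a, b in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=mad)
```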
Current methods of correcting tCA are fairly complicated, though it can be completely automated using a patched version of align_image_stack from the hugin project. This technique works very well but could use some optimisation work to make it faster (and the align_image_stack patch needs integrating).
Also, according to this post by the author of ufraw, the tCA correction in the latest dcraw is done before Bayer interpolation (note that my interpretation of this code was the opposite, so what do I know - Bruno). This tool should also create parameters for use by dcraw.
=== Extend hugin's output options for stitching ===
Note this project would need to be complementary to any work done on SoC_2010_ideas#Makefile_system_and_Detection_of_panoramas. i.e. the framework for creating it doesn't yet exist.
Currently hugin offers jpg, png, tiff and layered tiff output. Suggested are other formats like layered psd, as well as swf, java and html generation for web publishing. Despite all software projects' efforts, stitched panoramas have some errors (like ghost objects) that can easily be removed in Photoshop or Gimp with layered output. A lot of open source graphics software already offers the proposed file formats, but this is not an easy task that can be implemented in a week, and it is therefore suitable as a standalone SoC project.
Note that this is potentially complementary and orthogonal to the batch stitching project described above; Makefiles are a good way to provide additional output types and are quite suitable for HTML generation or even uploading.
hugin and other tools generate cropped TIFF files, with each 'layer' as a separate file. The workflow described above for using this data in image editors really requires file formats that contain all layers in a single file, such as multilayer TIFF or PSD. Command-line tools for assembling these files are currently either too buggy to use or non-existent, and fixing this situation would be a good project in itself.
=== Utility for creating a Philosphere ===
Update: note that during the proposal stage this project was discarded as being too small; also, mathmap is a better tool for implementing this kind of thing.
Goal: A utility integrated into hugin for creating images from panoramas, suitable for printing.
- Implementation of the described process using modified PT scripts, then combining the partial results into a final image
- Integration into hugin
Discussion: There are several possible outputs that can be used to print a panorama. Examples: polyhedron, "orange slices", rhombicuboctahedron...
- experiments with mathmap by User:Seb Przd - Particularly note the 'conformal cube'.
Required knowledge or interest in:
- panoramic imaging
- C++ and scripting language development skills.
=== Sky identification ===
Update: this was completed as part of GSoC 2008 and has been available since Hugin 0.8.0
Feature matching tools have a tendency to identify potentially useful features in clouds; this is not surprising given the amount of useful contrast and detail in a typical cloud.
This tendency is problematic when matching images for panorama stitching, as clouds can move significant distances even between photos taken seconds apart.
This project would find a suitable method of identifying areas of sky based on a search of existing literature, and implement it as a standalone tool capable of pruning Control points from existing panotools/hugin format script files.
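The pruning step could be prototyped as a filter over the 'c' (control point) lines of a panotools/hugin script, with the sky classifier left as a plug-in predicate. A sketch; `is_sky` is a stand-in for whatever detector the literature search turns up, and the function name is illustrative:

```python
def prune_control_points(script, is_sky):
    """Drop control point ('c') lines from a panotools/hugin script when
    either endpoint lands in sky.

    Control point lines look like 'c n0 N1 x100 y50 X110 Y55 t0', where
    x/y are the point in one image and X/Y the point in the other.
    is_sky(x, y) is a placeholder predicate receiving those pixel
    coordinates; a real implementation would consult a per-image sky
    mask.  All other lines pass through unchanged."""
    kept = []
    for line in script.splitlines():
        if line.startswith('c '):
            # Pull out the x/y/X/Y coordinate fields of the 'c' line.
            f = dict((tok[0], float(tok[1:])) for tok in line.split()[1:]
                     if tok[0] in 'xyXY')
            if is_sky(f['x'], f['y']) or is_sky(f['X'], f['Y']):
                continue  # prune: at least one endpoint is in sky
        kept.append(line)
    return '\n'.join(kept)
```

For instance, with a crude 'sky is the top of the frame' predicate such as `lambda x, y: y < 200`, control points whose y coordinate is small would be pruned while ground-level points survive.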