Historical:SoC 2008 student proposals

From PanoTools.org Wiki


  • name / university / current enrollment
  • short bio / overview of your educational background
  • did you ever code in C or C++, yes/no? please provide examples of code.
  • do you photograph panoramas? please provide examples.
  • do you make other use of hugin/panotools than for stitching panoramas? please describe and show examples.
  • were you involved in hugin/panotools development in the past? what was your contribution?
  • were you involved in other OpenSource development projects in the past? which, when and in what role?
  • why have you chosen your development idea and what do you expect from your implementation?
  • how much time do you plan to invest in the project? (we expect full time, 40h/week, but better make this explicit)
  • please provide a schedule of how this time will be spent on subtasks of the project. While this is only preliminary, be aware that at the beginning of the project you will be required to provide a detailed plan, and during the project you will issue weekly progress reports against that plan.

James Legg, OpenGL Accelerated Hugin Preview

This project was accepted, see SoC 2008 Project OpenGL Preview for details.

  • I am James Legg, a 2nd year undergraduate in Computer Science and Mathematics at The University of York (UK).
  • At university I have taken modules in human computer interaction, artificial intelligence, computer graphics and visualisation, theory of computation, algorithms and data structures, group & ring theory, linear algebra, vector calculus, and analysis among others. I have A-Levels in Physics, Maths, and Further Maths, and a Critical Thinking AS-Level.
  • I have experience in C++ and OpenGL; for example, I created a program that displays procedural random terrain, including multiple layers of textures, using OpenGL, SDL (for platform-independent event handling), and libnoise (for coherent noise generation). The source is available from http://lankyleggy.members.beeb.net (download megamap.tar.gz)
  • I do photograph panoramas and stitch them with Hugin, there are a few examples at: http://lankyleggy.members.beeb.net/panoramas/
  • I also use Hugin for aligning bracketed photos to create HDR images, and I use enblend (and the Gimp) to create seamless textures from photographs. An HDR example and a tonemap are on the same page as linked above.
  • I have only recently started contributing to Hugin, see http://groups.google.com/group/hugin-ptx/browse_thread/thread/f94c584095fcb9d0.
  • I have not been involved in other open source development projects beyond use, evangelising and bug reporting.
  • After completing this project, I expect to have a new optional preview display, acting largely like the existing one. This should instead use OpenGL, and a grid-like mesh to remap the images. It will only be approximate, but still functional. It will not require OpenGL shaders, as the hardware for this is not available to most users, although I would like to make it easy for someone to add this functionality at a later date. The main purpose is to speed up the preview process. Unfortunately, it will not have the accuracy and logarithmic HDR display of the current preview.
  • I would like to do this project since I have an interest in computer graphics and human interface design.
  • I will invest at least 40 hours a week after the first 3 weeks of the project. Unfortunately the 3rd week of Coding Phase 1 is an exam week; however, I will attempt to make up the time before the official coding start date.
  • I plan to split the project into the following stages:
    1. Beginnings: I will derive a new panorama preview from Hugin's existing one, set up an OpenGL context for it, and add extra preferences into Hugin to control selection of preview types, and the amount of texture memory and mesh resolution to use. I plan to have this done before the official "Coding starts" date.
    2. Texture creation: After this each image would be scaled appropriately and placed in texture memory.
    3. Photometric Alignment: OpenGL (without shaders) only directly supports linear colour transformations, however after this stage, white balance correction and (approximate) exposure correction will be used to display the images.
    4. Mesh creation: A grid-like mesh will be created for each displayed image. This will then be transformed under distortion correction, input lens projection and output projection, to get the coordinates used to make the display on the preview window. This will require using and extending the current remapping code. I will find a way to track duplicates (e.g. fisheye projections with a FOV > 360 degrees), and to split the mesh across discontinuities in the projection. The final output would then be a linear approximation between the mesh points (along continuous mappings), which is then handled by OpenGL. The results of the mesh transformations will be stored in memory so that changing parameters does not cause unnecessary costly calculations. I expect to have started this by the end of coding phase 1, but I expect this will be the most time consuming stage.
    5. Usability improvements: This will add extra features, e.g. a feature to show the outline of an image in the preview, which will hopefully allow quicker identification of images.
    6. Performance enhancing and testing: I will try to improve the performance of my code, and fix any bugs I may have. I will call for testers on the mailing list during this stage, and I may add additional requested features if I have the time. I will also make sure that the documentation is completely up-to-date.
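As an illustration of the grid-mesh idea in stage 4, here is a minimal Python sketch (Hugin itself would use C++ and OpenGL; the `remap()` below is a hypothetical toy transform standing in for Hugin's real remapping chain):

```python
# Sketch of the grid-mesh approximation: instead of remapping every pixel,
# remap only the vertices of a coarse grid and let the GPU interpolate
# linearly between them. remap() is a toy stand-in for the real transforms.
import math

def remap(u, v):
    # toy transform: treat (u, v) in [0,1]^2 as yaw/pitch and project to
    # equirectangular output coordinates, also in [0,1]^2
    yaw = (u - 0.5) * 2 * math.pi
    pitch = (v - 0.5) * math.pi
    return (yaw / (2 * math.pi) + 0.5, pitch / math.pi + 0.5)

def build_mesh(resolution):
    """Return a (resolution+1)^2 grid of (src, dst) vertex pairs.

    src is the texture coordinate, dst the transformed screen position;
    OpenGL would draw the quads between them and interpolate the interior.
    """
    mesh = []
    for j in range(resolution + 1):
        for i in range(resolution + 1):
            u, v = i / resolution, j / resolution
            mesh.append(((u, v), remap(u, v)))
    return mesh

mesh = build_mesh(8)   # 81 vertices to transform instead of millions of pixels
```

Caching these transformed vertices, as the proposal suggests, means a pan or zoom only re-draws the quads rather than recomputing the remapping.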

If this feature is not wanted, or mentors are not available, I have an alternative proposal: I would like to make a cube map projection. Since cube maps are the most common way of handling static reflections in real-time computer graphics, they should work well with interactive panorama viewers such as FreePV, and the feature has been requested before.

I would keep the definition of the faces that make up the projection generic enough (perhaps specified in a script) to allow shapes other than cubes to be defined (I see there is interest in foldable panoramas and other shapes, including creating a Philosphere). This will require work on making enblend and enfuse blend across distinct faces. I don't know any algorithms that can do this well, but I would be willing to research it and experiment.
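The standard cube-map lookup this alternative proposal relies on can be sketched as follows; this is a minimal Python illustration, and the face naming and orientation convention is an assumption, not necessarily what FreePV or any particular output format expects:

```python
def dir_to_cubemap(x, y, z):
    """Map a 3D view direction to (face, u, v), with u and v in [0, 1].

    The face labels and minor-axis orientations below are an arbitrary
    convention; a real implementation would match the target viewer.
    """
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:          # x is the dominant axis
        face = '+x' if x > 0 else '-x'
        major, a, b = ax, (-z if x > 0 else z), -y
    elif ay >= ax and ay >= az:        # y is the dominant axis
        face = '+y' if y > 0 else '-y'
        major, a, b = ay, x, (z if y > 0 else -z)
    else:                              # z is the dominant axis
        face = '+z' if z > 0 else '-z'
        major, a, b = az, (x if z > 0 else -x), -y
    # rescale the two minor coordinates from [-1, 1] to [0, 1]
    return face, (a / major + 1) / 2, (b / major + 1) / 2
```

A generic (scripted) face definition would replace the hard-coded six-way branch above with a list of face planes, which is what would allow non-cube shapes.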

Marko Kuder, Utility for Creating A Philosphere

Here is the application I intend to submit to Google; combined with the information I gave on the mailing list, it should answer the questions in the template.


Hugin is a software tool for the simple design of panoramic images. Its main function is to aid in combining multiple photographs, taken from a single point in space in different directions, by finding common control points in neighbouring views and applying the necessary transformations so that the images can be joined into a single, bigger picture. This can become an even tougher problem since photographs usually suffer from irregular lighting, and Hugin is able to compensate for that as well with color and light balancing.

The ultimate application of these functions is the creation of 360 degree panoramas. But such photographs have an innate problem: how to represent them in a more natural and attractive way than long and sometimes oddly curved pictures. The function still missing from Hugin that would solve this problem is the ability to transform an image so that it can be applied to a spherical body or, in practice, the automatic creation of models which can be printed and then folded into a polyhedron or sphere. The development and addition of this feature to Hugin would be my assignment on this project.

The original idea description can be found at: http://wiki.panotools.org/SoC_2008_ideas#Utility_for_creating_a_Philosphere


Printing panoramas as foldable cut-out models has been somewhat popularized by Philippe Hurbain, a French electronics engineer and panoramic photography enthusiast (his homepage is accessible at http://www.philohome.com). The rhombicuboctahedron, which he often uses for displaying his panoramic photographs, was nicknamed the »Philosphere« after Hurbain's first name. Even though printing panorama models has little technical potential, it is an idea that could bring panoramic photography closer to the general public, as it displays the photographs in a more interesting way.

For this reason, and because image transformations are a problem domain I can handle with my education despite lacking much practical work experience, I decided to apply for this specific idea. This way I could get some much-needed programming experience, which is often expected when applying for a job, familiarize myself with working on a bigger project, which I would have to get to know without having been involved in its previous development, and work on something that interests me. Other projects which are part of Google Summer of Code mostly require special skills, or their ideas are deeply integrated into the software they are based on. Since I plan to attend classes and pass my exams in June, I had to be realistic about my abilities and choose a project that I could handle in the time I would have. I was also a bit late in finding out about GSoC, so I have chosen only this one project, and I have spent my recent free time getting to know Hugin, the mentoring organization's project (which I had not used before), and making sure I was fit for the job.


While the basic idea of projecting an image onto a sphere seems simple, it can get quite complicated if we want the projection to be as realistic as possible. The problem is that there are many types of cameras with different lenses and consequently multiple types of projections, e.g. rectilinear, equirectangular, cylindrical and fisheye. Each of these should be handled differently, and considering that we are working with panoramas constructed out of multiple photographs, the simple problem can be expanded quite far. I do not pretend that the original problem is trivial: if we had to map multiple non-stitched overlapping images to a single body it would be difficult, but Hugin already contains the necessary code to join these images into one large panorama, so our task becomes easier. We can map an image to a sphere simply by converting it to so-called »orange slices« (as shown on http://www.philohome.com/rhombicuboctahedron/rhombicuboctahedron.htm), which can be done by transforming each horizontal line of a slice into a shorter one, so it fits into the set form. There is a problem with this approach, though: the final result depends greatly on the projection of the original image and on the number of slices. The more slices we choose, the more equirectangular the original photo should be (as opposed to a fisheye projection), but such a model can be very hard to fold when printed. The alternative to orange slices, which can be easier to assemble, is the use of polyhedra like a rhombicuboctahedron or icosahedron, although model images of these could prove harder to construct, as multiple different transformations would have to be applied to separate parts of the original image.
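The row-shortening described above can be sketched as follows. This assumes an equirectangular input panorama, where each slice (gore) narrows with the cosine of latitude; it is a simplified model, ignoring any allowance for folding tolerances:

```python
import math

def gore_row_width(pano_width, num_slices, row, pano_height):
    """Width (in pixels) of one »orange slice« at the given row.

    Assumes an equirectangular input: row 0 is at the north pole, the
    middle row is the equator, and each slice narrows as cos(latitude),
    shrinking to a point at the poles.
    """
    latitude = math.pi / 2 - math.pi * (row + 0.5) / pano_height
    return (pano_width / num_slices) * math.cos(latitude)
```

Rendering a slice then amounts to resampling each source row to this width and centering it, which is exactly the "each horizontal line into a shorter one" transformation.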
Some software offering 3D panorama model construction already exists, like PTGui (http://www.ptgui.com) and the IP-slicer script (http://www.bruno.postle.net/neatstuff/ip-slicer), but the former, a commercial application, can only construct a rhombicuboctahedron from a 360x180 degree panorama, and the latter only converts an image into orange slices. Both lack additional user options, such as projection correction and incomplete panorama support, which I would like to include in my project. In my recent discussion with Hugin's group, Bruno Postle - one of its members - kindly pointed out some solutions to similar problems which I could use as a basis for my project. One is his image patterner (http://www.bruno.postle.net/neatstuff/image-patterner), which allows mapping images to arbitrary shapes (but in his words has some performance problems and is written in Perl, so it can't be used directly), and the second is the ability to specify arbitrary 3D meshes as foldable models through seam definition in the well-known 3D modelling application Blender (http://www.blender.org/documentation/htmlI/x5336.html). Both offer a way to parametrize the goal shapes and thus greater flexibility for further expansion of user options. While this would be great, I'm not sure at the moment whether it could be developed in the time at hand, but it is absolutely an idea I would consider, if not otherwise then as further development after GSoC.


Pre-summer phase:

At first I intend to get to know Hugin and its source code well enough to visualize my planned code within it (including planning the GUI modifications). After that I should do some research and study the mathematical basis of my problem, so I can define the required code. At the start of the first coding phase, in June, I will have some exams to pass, so I can be expected to be less productive for a period of 2-3 weeks that month, but I intend to get a head start by beginning to code before the official start of the GSoC coding phase, and I plan to already have something to show the mentoring organization at that time.

Coding phase one:

After my exams I will most probably be able to commit my full time to the project. Until the 14th of July, when mid-term evaluations begin, I intend to complete the inclusion of at least one of the planned transformations in Hugin with full functionality, so I can show some practical results.

Coding phase two:

After mid-term evaluations I would add as many additional functions and user options as possible before doing the final bug fixes and polishing the finished code and GUI to complete the project.


I was born and raised in Trbovlje, a small town in Slovenia, and at the age of 22 I'm in the 4th year of the Computer Software university program at the Faculty of Computer and Information Science in Ljubljana, the capital of our country. My main interest is computer graphics, although my dream would be to work in a small company designing software and electronic devices of various types, because I like inventing new things. From my education I have a lot of experience in Java programming and some in Python, C and C++; the last of these would be my language of choice for computer graphics in combination with OpenGL. Recently I have been working with our school's AI lab, doing research in the field of search algorithms and heuristics. I'm not much into photography, which may seem strange considering the project I chose for my application, but I think my problem domain is separated enough from Hugin's main functionality that my lack of expertise in the area shouldn't be a problem. Google Summer of Code is an opportunity to prove to myself that I can work on real software projects, and I hope I will be given the chance to show the same to Hugin's project team.

Michael Ploujnikov, Automatic Test Suite


This project is to develop an automated test suite to perform functional (black box) and structural (white box) testing of the Hugin internals as well as the user interface. The goal is to increase developers' confidence in the current code and future changes to the code as well as to improve the user experience of the application.


  • a suitable framework will either be chosen from among existing frameworks or a new one will be developed based on analysis of the project needs and discussion with the core developers
  • the test suite will be integrated with the build system
  • a number of useful test cases will be created
  • procedures for creating unit tests, running tests and analyzing test results will be documented

Detailed Description

My name is Michael Ploujnikov and I am currently finishing the last year of a Computer Science program at York University in Canada.

I have written a lot of C programs: for school assignments, for learning the language, for personal projects, and for free/open-source software (FOSS) projects[1]. Most of my C++ programming experience is from working with FOSS projects.

I shoot hand-held panoramas some of which can be viewed in my public photo gallery[2]. Making even "regular" panoramas with Hugin is so much fun that I have not had a chance to try anything fancy.

I have not been involved with Hugin in the past. However, I have been involved in programming roles with other free/opensource software projects. My official resume[3] has some more details about my involvement.

I want to work on this project for a number of reasons. I strongly believe in test-driven development and wish to make it possible (to some extent) for the current and future developers of a really fun application - Hugin! I am currently taking a university course[4] on software testing and would like to apply the learned concepts while they are fresh in my mind. I want to learn how Hugin works, and writing tests for the code base is arguably one of the best ways to do so. I want to help the experienced Hugin/panorama developers who volunteer their free time to work on this application by doing the rather tedious and boring task of setting up an automatic test suite for them. As a result, I hope the developers will have more time to spend on less "trivial" tasks. Finally, I want to get a feel for some unit testing frameworks (other than JUnit) that could be useful in my future career and FOSS projects.

I'm committed to working at least 40 hours per week on this project for the duration of the GSoC, except for the first week in April when I will be busy with final exams, and one other week (to be determined) when I will be away on vacation.


In chronological order, I will:

  • Finish school exams
  • Make sure I can run Hugin compiled from source
  • Research existing testing frameworks specifically for the C/C++ languages and wx/GTK GUI toolkits
  • Research testing frameworks that work well with CMake
  • Make a list of test suite features I would like to have
  • Modify the test suite feature list by discussing it with core Hugin developers to address their needs
  • Decide (with core Hugin developers) whether an existing framework (could be more than one) is suitable for Hugin
  • Set up the existing framework(s) with stub tests, or start creating a new framework (depending on how the previous decision goes)
  • Write some simple tests
  • Add concise usage documentation for the test framework on the wiki
  • Integrate the existing tests (I only found stuff in hugin/trunk/src/hugin1/tests)
  • Think about the testing techniques learned in school and use an appropriate one to come up with a "good" set of test cases
  • Update the documentation, because at this point the testing framework will probably have been modified
  • Write some tests for currently open bugs
  • Search the Google group for proposed test cases and try to integrate them into the suite
  • Keep adding more tests until time runs out!
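As a minimal sketch of the black-box side of such a suite, the harness below runs a command-line tool and asserts on its exit status and output. Here `sys.executable` stands in for a real Hugin tool such as nona; an actual suite would plug this into the chosen framework and the CMake/CTest build rather than use bare asserts:

```python
# Black-box test sketch: drive a command-line tool and check its behaviour
# from the outside. sys.executable is a portable stand-in for a real tool.
import subprocess
import sys

def run_tool(argv, timeout=60):
    """Run a tool, returning (exit_code, stdout) for the test to assert on."""
    result = subprocess.run(argv, capture_output=True, text=True,
                            timeout=timeout)
    return result.returncode, result.stdout

# example "test case": the tool must succeed and produce expected output
code, out = run_tool([sys.executable, "-c", "print('stitched')"])
assert code == 0 and out.strip() == 'stitched'
```

White-box (structural) tests would instead link against Hugin's internal libraries and exercise individual classes, which is where a C++ unit testing framework comes in.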

Onur Kucuktunc, Automatic Feature Matching for Panoramic Images


The aim of this Summer of Code project is to develop an efficient method for automatically matching local image features. Automatic image stitching, the process of combining images to form a panoramic picture, can be performed in two steps: the first step is to extract feature points (scale, rotation and illumination invariant), which is handled by the "Feature Detection" project. The second step, and the goal of this project, is to find a set of corresponding features between images by using a matching algorithm.

Cover trees (a nearest neighbor search structure) and RANSAC outlier pruning (an iterative method for fitting models to data sets with many outliers) will be implemented in order to find the correspondence between images. EXIF heuristics can also be used to speed up feature matching: the time a picture was taken and its filename give some information about the order of images in a panorama.



In panorama stitching software, an automatic feature matching algorithm needs to find the correspondence between feature points across images. The problem is to develop a robust and efficient matching algorithm.


Using nearest neighbor matching and RANSAC outlier pruning together with EXIF heuristics should increase the robustness and efficiency of the feature matching method. Although the complexity of matching n images with a straightforward approach is O(n^2), we can reduce it to O(n log n) by using a nearest neighbor algorithm [1]. The cover tree [2] is a remarkably fast nearest neighbor structure, with documentation and an implementation in [3]. RANSAC (Random Sample Consensus) is an iterative method for outlier pruning. Fischler and Bolles propose and discuss RANSAC model fitting for image analysis in [4]. There is also an overview and pseudo-code for RANSAC on Wikipedia [5].
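As an illustration of the RANSAC idea, here is a toy Python sketch that estimates a 1D translation between matched coordinates; real feature matching would fit a homography or similar geometric model instead, but the sample-score-refit loop is the same:

```python
import random

def ransac_translation(matches, iters=200, tol=2.0, seed=0):
    """Estimate a 1D translation dx between matched x-coordinates with RANSAC.

    matches: list of (x_left, x_right) pairs, some of which are outliers.
    This is a toy model; the structure mirrors Fischler & Bolles: draw a
    minimal sample, count inliers within a tolerance, keep the best
    consensus set, then refit on it.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        xl, xr = rng.choice(matches)            # minimal sample: one pair
        dx = xr - xl                            # hypothesised model
        inliers = [m for m in matches if abs((m[1] - m[0]) - dx) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refit the model on the consensus set
    dx = sum(b - a for a, b in best_inliers) / len(best_inliers)
    return dx, best_inliers
```

The outlier pairs simply never enter the consensus set, which is how RANSAC prunes false feature matches before the panorama geometry is estimated.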

EXIF heuristics can also improve the efficiency and effectiveness of the method. If available, the time a picture was taken, its filename (like DSC_0001.jpg), camera orientation, etc. are useful information which can both decrease the running time and increase correctness. Geolocation information in the EXIF data also gives clues as to whether or not images belong to the same panorama. The "Exchangeable image file format" Wikipedia article [6] gives details about the format.
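A minimal sketch of such a heuristic: sort the images by filename (standing in here for the EXIF timestamp) and only compare each image with its nearest neighbours in shooting order, cutting the number of image pairs down from O(n^2):

```python
def candidate_pairs(filenames, window=2):
    """Only compare images that are close in shooting order.

    Sorting by filename is a stand-in for sorting by EXIF timestamp; a
    real implementation would fall back to comparing all pairs when no
    usable ordering is available, since panoramas can be shot out of order.
    """
    ordered = sorted(filenames)
    pairs = []
    for i in range(len(ordered)):
        for j in range(i + 1, min(i + 1 + window, len(ordered))):
            pairs.append((ordered[i], ordered[j]))
    return pairs
```

With n images and a fixed window this yields O(n) candidate pairs instead of n(n-1)/2, which is where the claimed running-time reduction comes from.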

Performance Measures

We need to be sure that the applied algorithms really improve the robustness and speed of the method. Therefore, comparisons should be made on a dataset for which we manually specify the matching feature points. Correctly retrieved matching points (hits), retrieved but incorrect ones (false positives), and the matching points we could not retrieve (misses) can be plotted on a ROC curve. In this way, we can improve performance by selecting the right methods and adjusting parameters to the values which give the highest hit rate. For example, we can compare k-d trees, which Brown and Lowe used in [7], with cover trees. The running time of the algorithm is also important for our purpose; it can be evaluated both by analyzing the complexity of the algorithms and by recording running times in each test.
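Counting hits, false positives and misses against a manually labelled ground truth is straightforward; a minimal sketch, where each match is represented as a pair of feature-point identifiers:

```python
def match_stats(predicted, ground_truth):
    """Count hits, false positives and misses for one evaluation point.

    predicted and ground_truth are collections of matches, each match
    being a hashable pair of feature-point identifiers.
    """
    predicted, ground_truth = set(predicted), set(ground_truth)
    hits = len(predicted & ground_truth)            # correct matches found
    false_positives = len(predicted - ground_truth) # found but wrong
    misses = len(ground_truth - predicted)          # correct but not found
    return hits, false_positives, misses
```

Sweeping a matching threshold and recording these three counts at each setting gives the points of the ROC-style curve described above.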

Time Schedule

I plan to invest at least 30 hours per week in this project. I will be on vacation for 4-5 days, and for another week I will probably be busy moving house, but I will balance this by working hard in order not to fall behind schedule. My planned actions will be as follows:

  • Before May 26
    • Getting to know the developers, mentors, reading documentation,
    • Getting used to hugin/panotools, your coding strategy, bug reporting and fixing,
    • Finding and/or generating representative datasets for testing and evaluation purposes,
    • Reading scientific material, understanding all the concepts of matching and pruning procedures,
    • Beginning documentation,
    • Deciding inputs and outputs of the project,
    • Making contact with the developer of Feature Detection part,
    • Making a survey to learn the habits of photographers for effective EXIF heuristics.
  • May 26 to July 7-14 (midterm evaluations)
    • Combining algorithms by using and adapting existing cover tree and RANSAC algorithms,
    • Adding heuristics to the implementation and testing whether they really improve efficiency and effectiveness,
    • Preparing the libraries, standalone applications
    • Getting ready for mid-term evaluations
  • July 15 to August 11-18 or September 1 (final evaluations)
    • Testing with a large number of images, offering code for public review,
    • Improving heuristics and other parts with respect to the feedbacks and test results,
    • Completing documentations, preparing libraries, applications, etc.
    • Getting ready for final evaluations


  • A library for automatic feature matching which uses proposed algorithms,
  • Documentation created by using the doxygen tool,
  • Comparisons of the algorithms, test cases for running time analysis,
  • A standalone, cross-platform application that uses the library,
  • Fully-commented source codes both for the library and Hugin application that uses the library.


I'm Onur Kucuktunc, an MSc student in computer engineering at Bilkent University, Turkey. I'm currently working on our Multimedia Database System (http://cs.bilkent.edu.tr/~bilmdg/), which includes many parts related to computer vision. I have taken Basics of Signals and Systems, Image Analysis, Computer Vision, and Pattern Recognition courses. I'm also a photographer.

I have chosen this topic and the Hugin/panotools project because last semester, for the Computer Vision course, I worked on a project titled "3D Face Reconstruction from 2D Images for Effective Face Recognition". We proposed a 3D face reconstruction method based on automatically matching SIFT features, since most current implementations still require manual selection of feature points across different images. You can find additional information on my website [8].

I'm mostly experienced in Java and Matlab programming, but I have also written low-level, operating-system-related code in C, and graphics code (using OpenGL) in C++. As an OS X user, I also believe I will be able to report and fix platform-dependent bugs in Hugin. I have i18n and translation experience in some open-source projects, but this will be my first time actually contributing as a developer.


  1. J. Beis, D. Lowe. "Shape indexing using approximate nearest-neighbor search in high-dimensional spaces."
  2. A. Beygelzimer, S. Kakade, J. Langford. "Cover Trees for Nearest Neighbor." ICML 2006.
  3. http://hunch.net/~jl/projects/cover_tree/cover_tree.html
  4. M. A. Fischler, R. C. Bolles. "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography." Comm. of the ACM 24.
  5. RANSAC, http://en.wikipedia.org/wiki/RANSAC
  6. http://en.wikipedia.org/wiki/Exchangeable_image_file_format
  7. M. Brown, D. G. Lowe. "Recognizing Panoramas."
  8. Personal website, http://cs.bilkent.edu.tr/~onurk/

Marko Kuder, Batch Processing


Hugin is a graphical user interface using PanoTools software libraries and programs for the creation and editing of panoramic images. Its main function is to join multiple photographs of an area, taken from the same point of view in different directions, into one large panorama. Using several command-line applications like autopano-sift, nona and enblend it offers automatic control point recognition, lighting and color correction and many similar functions.

Some of these functions can be computationally demanding, so Hugin could benefit from the addition of a batch processor: an additional interface which would enable the user to prepare several projects for serial execution. Developing such an addition is the subject of this proposal.

The original description of the idea can be found at: http://wiki.panotools.org/SoC_2008_ideas#Batch_processing


Originally I only had time to write one application, but when the deadline was extended, Hugin's team recommended that I choose an additional project, perhaps this one, especially since it has a higher priority than my previous choice, the utility for creating a philosphere. After examining the idea and discussing it with the group, I concluded that it is also a goal I could achieve in the required time, while presenting an interesting challenge that would provide useful experience for my future career as a programmer.

Hugin is a GUI built on top of command-line applications, which means its own code provides only part of its actual functionality. Most processing is done by separate programs, which are executed by a single command with defined input/output files and multiple switches specifying the options the user wishes to use. Most of these options can be selected in Hugin's GUI. When the user instructs the program to execute the processing, a Makefile is created which includes all the instructions needed to produce the final output files. Because the creation of panoramas usually consists of several stages besides the actual joining (or stitching) of images, e.g. color correction, the Makefiles usually include intermediate targets in the form of temporary files, mostly images. These are necessary because different command-line programs are used for different operations. However, the user cannot control what happens with these intermediate files unless he/she is proficient in the use of Makefiles. Another problem is that Hugin's image processing is not fast, but neither is it very long. So if the user wishes to create several panoramas quickly, he/she has to stay near the computer the whole time to start the next project when one finishes.


The main goal of my project would be to develop a batch processor for Hugin which would address these problems. First of all, I would add a tab to the existing GUI with a manageable list of image processing projects for execution. The basic use of Hugin would stay the same, except that the user would have the option not to execute the processing immediately, but to add the specified project to the batch instead. The project file and Makefile created by Hugin would be saved, and the user could then change all the options and input files for a new project, add it to the batch as well, and so on, until the list included all desired projects, which could then be executed serially by the program. Besides intermediate files, the Makefile produced by Hugin includes some so-called 'phony' targets, like 'all' and 'clean'. These are standard parts of Makefiles and could play an important role in the batch processor, as other targets are usually actual files created before or during the process. The program could automatically recognize different types of targets and offer the user additional processing for individual input, output or intermediate images. This processing could be performed by 'plugin' Makefiles, which could be shipped with Hugin. An example of such a Makefile would be one that creates a QTVR - a QuickTime virtual reality cube - from an input panorama (instructions for making such a Makefile can be found at http://thread.gmane.org/gmane.comp.misc.ptx/8754). Since one Makefile can have quite a lot of intermediate files, this automatic target recognition approach could prove a little awkward for the user if it is to offer the greatest flexibility of extensible intermediate file processing. Perhaps the program should instead only offer the redirection of input/output files between Makefiles.
Which design is best will become apparent as the project develops, but especially the latter version could be made easier to use by adding a visual graph editor, where processes are represented as nodes and the redirections of files between Makefiles as connections between them. In the first version of my program, however, I would keep things simple: the user would just specify input and output filenames for each process in the batch, so the processes could already be connected in some way, though not very robustly, as simple spelling errors would make the connections work incorrectly. It would also be good if batch configurations were portable between different systems, but at the moment I do not know how much Hugin's project files differ between systems, so that would be an optional goal.
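The serial execution core of such a batch processor can be sketched in a few lines; the `make -f project.mk all` invocation and the `dry_run` switch are illustrative assumptions, not Hugin's actual interface:

```python
# Sketch of the serial batch queue: each entry is a Makefile saved by Hugin,
# executed one after another with make. The real batch processor would live
# in Hugin's GUI and report progress per project.
import subprocess

def run_batch(makefiles, dry_run=True):
    """Run each project's Makefile in turn, collecting (makefile, exit_code).

    dry_run skips the actual make invocation so the queue logic can be
    exercised without make or any project files present.
    """
    results = []
    for mf in makefiles:
        if dry_run:
            results.append((mf, 0))
            continue
        proc = subprocess.run(["make", "-f", mf, "all"])
        results.append((mf, proc.returncode))
    return results
```

Plugin Makefiles would slot into the same loop as extra entries whose input files are the outputs of an earlier entry, which is the input/output redirection discussed above.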


Pre-summer phase:

As I mentioned in my previous application, I would probably work less on the project in June, as it is the month of exams; to compensate, I would start working on the project as soon as possible before the official start of coding, so I would not miss any deadlines. By the official start of GSoC coding at the end of May, I should have a deeper understanding of Hugin's Makefile system and a clear plan of how my project should be completed. I might even have some code done by then (maybe the GUI extension).

Coding phase one:

By the mid-term evaluation I plan to have completed the first part: the GUI and a Makefile execution queue system, probably already with 'plugin' Makefile support and access to the options Make offers, such as parallel execution of targets.

Coding phase two:

After the mid-term evaluation, if everything goes according to plan, I will try to develop additional functionality, such as the automatic target recognition and intermediate file processing mentioned above. In August I should present the program to Hugin's team for testing and bug discovery; hopefully all bugs will be fixed by the end of the month.


I was born in 1985 in Trbovlje, a small town in Slovenia, and I am currently a 4th-year undergraduate student at the Faculty of Computer and Information Science, part of the University of Ljubljana, Slovenia. I believe I have enough experience in C/C++ programming to work on this project. During my education I have also learned Java, Python, and some Pascal, but outside of school I have not had much practice yet, which is largely why I decided to apply for GSoC. My interests are in graphics, computer game programming, and electronics, although I have not yet fully decided on my future professional orientation. Recently at school I started working with the AI lab on research into heuristics and search algorithms. Even though Hugin is an application for photographers, which I am not, I believe that developing a batch processor is a goal I can achieve without expert knowledge of camera technology, and from my discussion with Hugin's group it seems they agree with that conclusion.

Onur Kucuktunc, Extending Hugin's output options for stitching


The current version of Hugin has limited output options. One consideration is that small errors left after the stitching process can easily be removed with a graphics editor; adding output support for the file formats these image manipulation/editing applications use would therefore improve usability.

The purpose of this project is to extend Hugin's current output options with additional compression techniques and to add support for some useful formats, such as multilayered TIFF or PSD. Generating Flash-based (swf) and hypertext (html) documents would be another way of extending the output options, especially for web publishing.


As mentioned in the abstract, Hugin's output options need to be extended for several reasons. One problem, also stated in the project idea [1], is that stitched panoramas may contain ghost objects. For example, a walking person may appear in one scene but disappear in the next, so that the moving object appears semi-transparent after blending. Ghosting can easily be removed if Hugin can output a layered image format. Popular image manipulation/editing applications like Adobe Photoshop(r) [2] and its open-source alternative GIMP [3] can work with multilayered TIFF and PSD formats. Although the specifications for the multilayered PSD and PSB formats are not public, Panotools already has some code for exporting PSD, and we may offer some more compression options. GIMP, on the other hand, has code for both importing and exporting multilayer TIFF files, so supporting that format would be easier. OpenRaster is another format with multi-layer support, but its specification is not yet clearly defined; there is a draft specification in [4].

Besides supporting raster graphics formats, exporting to a vector graphics format such as SVG would have other advantages. Generating swf, js, and html files seems to be another useful export extension of Hugin for those who need web publishing.

To sum up, extending Hugin's output options has the following parts:

  • Multilayered TIFF
  • Multilayered PSD and PSB (if possible)
  • OpenRaster (with the given draft-specifications)
  • A vector graphics format, i.e. SVG
  • Swf, html, js (if needed) for web publishing
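As an illustration of the first item, a multilayered TIFF in this sense is a multi-page TIFF whose pages GIMP opens as separate layers. A minimal sketch using the Python Pillow library (for illustration only; Hugin itself would use its C++ image libraries, and the deflate compression chosen here is just one example of the extra compression options proposed):

```python
from PIL import Image

def save_layered_tiff(layers, path):
    """Save a list of PIL images as one multi-page TIFF.

    Each input image becomes one page of the file; GIMP presents
    these pages as layers when opening it. Compression is set to
    lossless deflate as an example of an extra compression option.
    """
    first, *rest = layers
    first.save(path, format="TIFF", save_all=True,
               append_images=rest, compression="tiff_deflate")
```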

Time Schedule

I plan to invest at least 30 hours per week in this project. I will be on vacation for 4-5 days, and for another week I will probably be busy moving house, but I will compensate by working hard so as not to fall behind schedule. My planned actions are as follows:

  • Before May 26
    • Getting to know the developers, mentors, reading documentation,
    • Getting used to hugin/panotools, the project's coding conventions, and bug reporting and fixing,
  • May 26 to July 7-14 (midterm evaluations)
    • Beginning documentation,
    • Preparing output libraries
    • Getting ready for mid-term evaluations
  • July 15 to August 11-18 or September 1 (final evaluations)
    • Completing documentation, finishing the libraries, etc.
    • Getting ready for final evaluations


Deliverables

  • Output library for the proposed file formats,
  • Documentation created using the doxygen tool,
  • Fully commented source code for both the library and the Hugin application that uses it.


I'm Onur Kucuktunc, an MSc student in computer engineering at Bilkent University, Turkey. I'm currently working on our Multimedia Database System (http://cs.bilkent.edu.tr/~bilmdg/), which includes many parts related to computer vision. I have taken Basics of Signals and Systems, Image Analysis, Computer Vision, and Pattern Recognition courses. I'm also a photographer. You can find detailed information about me on my website [5].

I'm mostly experienced in Java and Matlab programming, but I have also written low-level, operating-system-related code in C and graphics code (using OpenGL) in C++. As an OS X user, I also believe I will be able to report and fix bugs in hugin. I have some i18n and translation experience in open-source projects, but this will be my first time actually contributing as a developer.


  1. Extend hugin's output options for stitching, http://wiki.panotools.org/SoC_2008_ideas#Extend_hugin.27s_output_options_for_stitching
  2. Adobe Photoshop(r), http://www.adobe.com/products/photoshop/index.html
  3. GIMP - The GNU Image Manipulation Program, http://www.gimp.org/
  4. OpenRaster, http://create.freedesktop.org/specs/OpenRaster-draft.pdf
  5. Personal website, http://cs.bilkent.edu.tr/~onurk/

Tim Nugent, Support Vector Machine-based Sky Identification

  • I am a 2nd year PhD Student in the Bioinformatics group at University College London, UK.
  • I originally trained as a Pharmacologist, attaining a 2.1 BSc (Hons.) from the University of Bristol. I then completed an MRes (with Merit) in Bioinformatics at Birkbeck College, University of London, before working for a bioinformatics company Inpharmatica Ltd., and then as a bioinformatician for a genetics research project at Queen Mary, University of London. I returned to academia and am currently in my 2nd year of a 4 year PhD course in the bioinformatics group; my project concerns the structure prediction of transmembrane proteins using machine learning approaches (more info here: http://www.cs.ucl.ac.uk/staff/T.Nugent/research.html).
  • I have programmed in C/C++ for over two years and have recently implemented support vector machine (SVM) and dynamic programming algorithms in C++ for a protein structure prediction program. I have submitted various patches for Hugin, and am currently working on a game that uses the SDL libraries.
  • I have a strong interest in photography, particularly panoramic and underwater. I am particularly keen to combine these two areas as there are very few examples where they overlap. I'm familiar with a range of stitching tools, but tend to use Hugin as I rarely work on anything other than a Linux box. Here are a few examples: http://www.cs.ucl.ac.uk/staff/T.Nugent/panoramics/
  • So far I have not used Hugin/Panotools for anything other than stitching panoramas. However, I'm keen to experiment with HDR photography.
  • I'm a developer for the Bioperl project (http://www.bioperl.org), which produces open-source Perl code for the bioinformatics community. The development branch, available via SVN, includes some of my modules for drawing graphics using libgd and SVG. I intend to port these to Python for the increasingly popular BioPython project. http://www.cs.ucl.ac.uk/staff/T.Nugent/code.html
  • The reason I've chosen to tackle the sky identification problem is that I'd like to apply my knowledge of support vector machine (SVM) classification to a novel problem outside biology. Sky identification ties in perfectly with my interest in panoramic photography, and as someone who has had to manually remove control points from rogue clouds, I feel I'm the ideal person to attack this problem. SVM classifiers have been used in a range of areas from finance to face recognition and there is huge scope for applying these tools to new fields. However, training and cross validating SVMs can be computationally expensive. I'm fortunate to have access to a large Linux cluster at UCL running Sun Grid Engine, which I intend to use for this project (http://www.ucl.ac.uk/media/library/Dell).
  • For this implementation I intend to train an SVM classifier to automatically and accurately identify regions in an image that contain cloud and those that do not, so that control points can be removed or excluded, thereby increasing the quality of the alignment. This may also reduce the computation time for control point creation by shrinking the area within which to search for control points. The process could also be extended to other regions within an image that do not remain static over a series of photos, e.g. water, waves, other windswept regions etc. I will develop a standalone tool in C++ to score all regions of the image and return a posterior probability of whether or not each region contains cloud. A scoring function and an adjustable threshold will be used to trim control points from the associated Panotools/Hugin file, or to create a mask with which to exclude control point searches. I also intend to produce a GUI-based version using wxWidgets/GTK etc. The source code will be freely available, and I will provide binaries for all the major platforms.
  • As a full-time PhD student, I do not have a summer break as undergraduates do, and therefore am unable to contribute a 40-hour week to the project. I intend to use any free time I have inside or outside university hours, and envisage working on the project most evenings during the week and for a significant portion of the weekend. While this may not be ideal, I feel the expertise I already have will allow me to 'hit the ground running'. I am also able to start the project immediately, so I am highly confident I can finish it by Google's 'pencils down' date. Should I encounter problems, I'm willing to take my annual leave in order to work full time on the project.
  • My preliminary plan for the project is as follows, beginning immediately:

1) Familiarise myself with:

  1. Hugin source code - begun, will continue for duration of the project.
  2. VIGRA image processing library - begun, will continue for duration of the project.
  3. libSVM/SVMlight source - already done to a large degree.
  4. Flickr API - begun.
  5. Feature/texture classification algorithms (e.g. Gabor filtering).

2) Assemble training and test sets of images containing/not containing cloud, probably from community image sites such as Flickr, using the Flickr API and Perl scripts. This will be ongoing in order to obtain maximum diversity, but in the first instance it should take approximately two weeks to collect and then accurately classify enough images (and the regions within them) to begin SVM training.

3) Use VIGRA, and perhaps other approaches, to extract image features from all regions (e.g. RGB data, smoothness, edges etc - motifs common to cloud textures). Add additional global features, for example the (x,y) co-ordinates of the region within the image. Finding the features that accurately discriminate between the two sets may not be straightforward, so I envisage this will be the hardest part of the project and may take 2 months.
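The per-region features of step (3) could start out as simple statistics. A sketch in Python, where the particular feature set (mean colour, brightness variance as a smoothness measure, and region position as a global cue) is illustrative rather than final:

```python
def region_features(pixels, x0, y0, size, img_w, img_h):
    """Compute simple features for a square region of an RGB image.

    pixels: nested list, pixels[y][x] -> (r, g, b).
    Returns [mean_r, mean_g, mean_b, smoothness, rel_x, rel_y],
    where smoothness is the variance of per-pixel brightness
    (clouds tend to be bright and low-variance) and (rel_x, rel_y)
    is the region's position in the image - a global cue, since
    sky is usually near the top of a frame.
    """
    rs, gs, bs, lum = [], [], [], []
    for y in range(y0, y0 + size):
        for x in range(x0, x0 + size):
            r, g, b = pixels[y][x]
            rs.append(r); gs.append(g); bs.append(b)
            lum.append((r + g + b) / 3.0)
    n = float(len(lum))
    mean_l = sum(lum) / n
    variance = sum((v - mean_l) ** 2 for v in lum) / n
    return [sum(rs) / n, sum(gs) / n, sum(bs) / n,
            variance, x0 / float(img_w), y0 / float(img_h)]
```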

4) Use these features to train an SVM classifier. Optimise SVM parameters (possibly using genetic algorithm based grid search), cross-validate, re-train, optimise (and repeat..). Use UCL Linux cluster to run many SVM training jobs in parallel. This stage will overlap with (3) to a large degree, finishing two weeks afterwards.
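The optimisation in step (4) amounts to maximising cross-validation accuracy over a parameter grid. A minimal sketch, where `train_and_validate` is a hypothetical placeholder for the actual libSVM training and cross-validation job that would run on the cluster:

```python
import itertools

def grid_search(train_and_validate, c_values, gamma_values):
    """Exhaustive search over the SVM's C and gamma parameters.

    train_and_validate(C, gamma) must return a cross-validation
    accuracy; in the real project it would wrap a libSVM training
    run (many of these can execute in parallel on the cluster).
    Returns (best_C, best_gamma, best_accuracy).
    """
    best = None
    for C, gamma in itertools.product(c_values, gamma_values):
        acc = train_and_validate(C, gamma)
        if best is None or acc > best[2]:
            best = (C, gamma, acc)
    return best
```

A genetic-algorithm search would replace the exhaustive product with a population of (C, gamma) candidates, but the evaluation function stays the same.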

5) Write a standalone tool in C++ to score all regions of the image and return a posterior probability of whether or not each region contains cloud. Create a scoring function which analyses the results globally - e.g. score up 'cloud' regions that are in contact with a large number of other 'cloud' regions, and score down those that are not. Use an adjustable threshold to trim control points from the associated Panotools/Hugin file. This should take about 3 weeks.
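The control point trimming in step (5) could look roughly like this. The sketch assumes Hugin's .pto project format, in which control point lines start with 'c' and carry coordinate fields such as 'x100' and 'y50', and it takes a caller-supplied `is_cloud(x, y)` function standing in for the SVM's posterior probability:

```python
def trim_control_points(pto_text, is_cloud, threshold=0.5):
    """Drop control points that fall in likely-cloud regions.

    pto_text: contents of a Hugin .pto project file. Control point
    lines start with 'c' and include 'x<val>' and 'y<val>' fields
    (the point's position in the first image of the pair).
    is_cloud(x, y) returns the classifier's posterior probability
    that this position is cloud. Returns the filtered project text.
    """
    kept = []
    for line in pto_text.splitlines():
        if line.startswith("c "):
            fields = {tok[0]: tok[1:] for tok in line.split()[1:]}
            x, y = float(fields["x"]), float(fields["y"])
            if is_cloud(x, y) >= threshold:
                continue  # likely cloud: drop this control point
        kept.append(line)
    return "\n".join(kept)
```

The same probabilities, rendered per region instead of per point, would give the mask used to exclude control point searches.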

6) Write a GUI based version using wxWidgets/GTK etc. This should take about two weeks. I see this as the least important component of the project as I envisage the tool being run from the command line, or via Hugin, so should other stages of the project fall behind schedule, this part would be sacrificed.

7) Testing, both code (across multiple platforms) and prediction accuracy. This will take 2 weeks but will begin during (5).