Hi, this is my page. Not much here except a /Sandbox.
2011 GSoC Proposal
Short Description Currently, control points between pairs of photographs must be added manually by the user or "guessed" using a SIFT-based feature-matching algorithm. This project proposes a hybrid approach for straight lines: the user draws straight lines on a pair of images, and the computer then automatically adds and refines control points along the given path. Such a method will allow quick and simple alignment of photographs that contain straight lines.
My name is Steven Williams and I am in my fourth year as an undergraduate mathematics student at Walla Walla University. I have taken classes in data structures using C++. I have been an amateur photographer for about six years and own a Canon PowerShot A620 and a Canon Digital Rebel XTi. The only special equipment I have is a Manfrotto ball head tripod, but I shoot mostly handheld. Some of my photographs, including panoramas I have made using Hugin, can be found on Flickr and Panoramio. I also use Hugin to align images for exposure blending with qtpfsgui or enblend/enfuse.
I have been writing in C++ for three years. I became interested in programming in high school, when I wrote a Mastermind clone in TI-Basic on my TI-89. I also have a little experience with C# and, more recently, Processing. My current primary computer is a homebuilt quad-core desktop running Windows 7 with Ubuntu in a virtual machine.
Recently I wrote a program to visualise and interact with 2D IFS (iterated function system) fractals as part of a senior seminar project. It is on Git here.
My only previous interaction with the Hugin project is as a user. I have been using Hugin for several years to stitch panoramas.
For this project I would like to take up the Straight Line UI proposal from here. It is also a blueprint on Launchpad. From my IFS program I have experience working with user input and object manipulation, and I am excited to develop a new tool for Hugin that will enable users to match their photographs quickly and easily.
I propose a system that takes a line drawn by the user in the quick preview window and, using an edge detection algorithm, "snaps" it to the nearest detected edge for more precise straight-line input. Once this is done for the same line in matching images, the program scans points along the line and automatically matches control points between the images.
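As a rough illustration of the snapping step, the sketch below (not Hugin's actual API; the image representation, function names, and search strategy are my own assumptions) samples points along a user-drawn segment and moves each one, perpendicular to the segment, to the pixel with the strongest gradient magnitude within a small search window:

```cpp
#include <cmath>
#include <vector>

// A 2D point in image coordinates.
struct Point { double x, y; };

// Grayscale image as rows of pixel intensities.
using Image = std::vector<std::vector<double>>;

// Gradient magnitude via central differences; zero at the border.
double gradientAt(const Image& img, int x, int y) {
    int h = (int)img.size(), w = (int)img[0].size();
    if (x < 1 || x >= w - 1 || y < 1 || y >= h - 1) return 0.0;
    double gx = img[y][x + 1] - img[y][x - 1];
    double gy = img[y + 1][x] - img[y - 1][x];
    return std::sqrt(gx * gx + gy * gy);
}

// Sample n points on the segment p0->p1, then move each point along the
// segment's normal (up to `radius` pixels) to the location of maximum
// gradient magnitude, i.e. the nearest strong edge.
std::vector<Point> snapLineToEdge(const Image& img, Point p0, Point p1,
                                  int n, int radius) {
    double dx = p1.x - p0.x, dy = p1.y - p0.y;
    double len = std::sqrt(dx * dx + dy * dy);
    double nx = -dy / len, ny = dx / len;  // unit normal to the segment
    std::vector<Point> snapped;
    for (int i = 0; i < n; ++i) {
        double t = (n == 1) ? 0.5 : double(i) / (n - 1);
        Point p{p0.x + t * dx, p0.y + t * dy};
        double best = -1.0;
        Point bestP = p;
        for (int o = -radius; o <= radius; ++o) {
            int qx = (int)std::lround(p.x + o * nx);
            int qy = (int)std::lround(p.y + o * ny);
            double g = gradientAt(img, qx, qy);
            if (g > best) { best = g; bestP = {double(qx), double(qy)}; }
        }
        snapped.push_back(bestP);
    }
    return snapped;
}
```

In the real tool the gradient image would come from a proper edge detector (e.g. Canny) and the snapped points would then be matched against the corresponding line in the other image; this only shows the "snap to nearest edge" idea.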
I plan on spending at least 40 hours per week on this project. University graduation happens around June 10, so I may not be able to spend as much time during the two or three days around that weekend. Here is a (very) tentative schedule:
May 24-June 3 Finalize inputs/outputs for the new straight line tool.
June 6-17 Write line input methods for user interaction and a rough algorithm for guessing/sifting control points on the given lines.
June 20-July 1 Implement edge detection for given lines; test on a wide range of images.
July 4-8 Clean up and test for the mid-term evaluation.
July 18-29 Refine the control point guessing procedure.
August 1-12 Extra time for bugs, slow progress, etc.
August 15-19 Finalize and improve documentation.