PanoTools.org Wiki - User contributions [en]: User:Dg (http://wiki.panotools.org/User:Dg), revision of 2009-06-23 (edit summary: /* Week 4: Results of MATLAB Model */)
<hr />
<div>=SoC2009 Dev Ghosh=<br />
<br />
==Abstract==<br />
<br />
Hugin creates wide-angle panoramas from photographs taken at a single camera position, one of the most popular applications of stitching software. However, to image a flat object such as a painting at high resolution, photographs taken from different, casually chosen viewpoints must be combined into a large image mosaic, and Hugin cannot assemble photo mosaics of large flat objects photographed from many different camera locations. While tweaking Hugin’s settings can produce an approximate result, the underlying Panotools imaging model does not include a projective homography integrated with its lens distortion correction, so the optimizable imaging parameters cannot fully rectify and align the tiles of a mosaic. The result is mosaics with warped edges and significant distortion, and a process that requires awkward user interaction, as shown in [1], a tutorial by Joachim Fenkes titled “Creating linear panoramas with Hugin.” Fenkes must use horizontal image features, such as utility boxes aligned at the same height along a graffiti-covered wall, to straighten the snaking panorama. I propose to add a new “mosaic mode” to Panotools: a new image model based on multiple centers of projection that generates high-resolution, distortion-free images of flat objects captured with either handheld or tripod-mounted cameras.<br />
<br />
[1] http://www.dojoe.net/tutorials/linear-pano/<br />
<br />
<br />
==Details==<br />
<br />
===Problem===<br />
<br />
Users of Hugin have developed methods for stitching scans of 2D objects and for stitching linear panoramas. For best results, these methods rely on matching line constraints (control points of type horizontal or vertical line). On images lacking clear horizontal or vertical features, however, where only manually placed or SIFT-generated control points are available, these approaches produce poorly aligned mosaics.<br />
<br />
===Solution===<br />
<br />
The Panotools imaging model optimizes eight parameters to correct lens distortion and geometric alignment. However, this model cannot describe the warps between overlapping images of a planar object taken from different viewpoints. While leaving the existing lens distortion parameters in place, I propose to develop an extended geometric model describing the relationship between an orthographic view of a planar object and a perspective view of a subsection of that object. This model will be optimized using the existing Levenberg-Marquardt optimizer.<br />
The optimized parameters will allow us to project each view of a subsection of the object as if it were captured by a large, high-resolution orthographic camera, or by a perspective camera far enough away to view the whole object. This approach will also allow views of the object taken from low angles to be rectified and mapped to the orthographic view.<br />
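The warp between two views of a planar object is a 3x3 projective homography. As a minimal, hypothetical sketch (the actual implementation would target libpano13 in C, and the homography would come from the optimized parameters, not be hand-written), applying a homography to image points looks like this:<br />

```python
# Hypothetical sketch, not Panotools code: mapping a pixel through a 3x3
# planar homography H using homogeneous coordinates. The matrices below
# are made-up examples.

def apply_homography(H, x, y):
    """Map the point (x, y) through the homography H."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w  # divide out the homogeneous coordinate

# The identity homography leaves points unchanged.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(apply_homography(I, 3.0, 4.0))  # -> (3.0, 4.0)

# A nonzero bottom row introduces the perspective foreshortening that the
# current eight-parameter model cannot express.
P = [[1, 0, 0], [0, 1, 0], [0.1, 0, 1]]
print(apply_homography(P, 1.0, 0.0))
```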
<br />
===Performance Measures===<br />
<br />
The new geometric model will be tested on synthetic image data and on overlapping views of a large painting in the collections of the Art Institute of Chicago. The results will be compared to those obtained with the current Panotools model by average and worst control-point distance and by image differencing in overlap regions.<br />
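The control-point measures above are straightforward to state precisely. An illustrative sketch (the point pairs are made up; in practice they would be remapped control points from the two models being compared):<br />

```python
# Illustrative sketch of the proposed performance measures: average and
# worst (maximum) control-point distance after alignment.
import math

def cp_errors(pairs):
    """pairs: list of ((x1, y1), (x2, y2)) remapped control-point pairs.
    Returns (average distance, worst distance)."""
    d = [math.hypot(x1 - x2, y1 - y2) for (x1, y1), (x2, y2) in pairs]
    return sum(d) / len(d), max(d)

# One pair 5 pixels apart, one perfectly aligned.
avg, worst = cp_errors([((0, 0), (3, 4)), ((10, 10), (10, 10))])
print(avg, worst)  # -> 2.5 5.0
```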
<br />
==Project Timeline==<br />
<br />
I plan to spend at least 40 hours a week on this project. I have already begun to explore and modify the source code; this head start will allow me to work through April and May and continue through the end of Summer of Code. These are the steps I plan to take:<br />
<br />
* Study the current model for projection of images in libpano13 (prior to May 23)<br />
<br />
* Research existing models for relating images photographed from different viewpoints, including the “Gold Standard” Direct Linear Transform algorithm and Szeliski’s “Image Alignment and Stitching: A Tutorial” (prior to May 23)<br />
<br />
* Develop the geometric model for viewing the mosaic (two weeks)<br />
<br />
* Implement the new model in Panotools (five weeks)<br />
<br />
* Test the model on synthetic and real image data sets (two weeks)<br />
<br />
* Evaluate performance against the existing model by average control-point error (two weeks)<br />
<br />
* Wrap up and complete documentation (one week)<br />
<br />
==Deliverables==<br />
<br />
* Documentation describing the geometric model<br />
<br />
* A library implementing the model<br />
<br />
* Results of tests of the library<br />
<br />
* Fully commented source code<br />
<br />
==Biography==<br />
<br />
I have been experimenting with Hugin since Summer 2008 and started to build my own version and modify the source code in December 2008. I built Hugin both on Windows and on Linux under Ubuntu. My interest is in using Hugin to assemble large, high-resolution mosaics of paintings photographed from different viewpoints with 50% or more overlap. <br />
<br />
I am a PhD candidate in Electrical Engineering and Computer Science at Northwestern University, Evanston, Illinois. I have coding experience in Python, C/C++, and MATLAB. My background is in video, signal, and image processing and my previous projects have included making improvements to the Joint Scalable Video Model, a scalable version of the H.264 video codec written in C++. I enjoy taking pictures and solving problems and participating in Google’s Summer of Code will allow me to work solely on this project over the summer without other distractions.<br />
<br />
==References==<br />
<br />
* http://hugin.sourceforge.net/tutorials/scans/en.shtml<br />
<br />
* http://www.dojoe.net/tutorials/linear-pano/<br />
<br />
* Zhang, Z. and He, L.-W. (2007). Whiteboard scanning and image enhancement. Digital Signal Processing, 17(2), 414–432.<br />
<br />
* http://www.ics.forth.gr/~lourakis/homest/<br />
<br />
* http://grail.cs.washington.edu/projects/multipano/<br />
<br />
* Hartley, R. and Zisserman, A. (2003). Multiple View Geometry in Computer Vision. Cambridge University Press.<br />
<br />
* Szeliski, R. (2006). Image Alignment and Stitching: A Tutorial. Now Publishers Inc.<br />
<br />
* End of Proposal<br />
<br />
<br />
=Week 1: Initial Approach=<br />
Below, we see a top view of the planar object we wish to image. The circle represents the panosphere and the triangles represent the viewing frusta of two cameras. We wish to project points on the planar object, p(x,y), as viewed from Physical Camera 2 to a virtual (output) Anchor Camera, coincident with Physical Camera 1. <br />
<br />
[[File:Diagram.png]]<br />
<br />
To do this, we propose to add a position (x,y,z) for each image. All images with non-zero (x,y,z) coordinates are projected to the plane. The orientation of the plane will be described by the yaw and pitch of a normal to the planar surface. (Note that Physical Camera 1/the Anchor Camera is not necessarily perpendicular to the planar surface, but it is located at (0,0,0) and is assumed to be 1 distance unit from the plane.)<br />
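Describing the plane by the yaw and pitch of its normal can be sketched as follows. The angle conventions here are my own assumptions for illustration, not a Panotools definition:<br />

```python
# Hypothetical sketch: converting a (yaw, pitch) parameterization of the
# plane's normal into a unit vector. Convention assumed here: yaw rotates
# about the y-axis, pitch about the x-axis; (0, 0) faces the anchor camera.
import math

def plane_normal(yaw_deg, pitch_deg):
    """Unit normal of the plane for the given yaw and pitch, in degrees."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.sin(yaw) * math.cos(pitch),
            math.sin(pitch),
            math.cos(yaw) * math.cos(pitch))

# Zero yaw and pitch: the plane faces the anchor camera along the z-axis.
print(plane_normal(0, 0))  # -> (0.0, 0.0, 1.0)
```

Any (yaw, pitch) pair yields a unit-length normal, so only these two angles plus the assumed unit distance are needed to place the plane.<br />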
<br />
Creating the output image takes two steps:<br />
* We must map from the Anchor Camera onto the plane. This gives us a relation between angles on the panosphere and pixels on the plane.<br />
<br />
* We must map from the plane to Physical Camera 2. This gives us a relation between pixels on the plane and the position of Physical Camera 2.<br />
<br />
Next, the additional parameters will be optimized. At this point, it is unknown whether this optimization will be successful or stable.<br />
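The two steps above compose into a single point mapping from the Anchor Camera to Physical Camera 2. A toy sketch with stand-in maps (the real ones would be derived from the camera parameters, not the linear and shift functions invented here):<br />

```python
# Illustrative composition of the two mappings described above. Both maps
# are hypothetical stand-ins chosen only to show the structure.

def anchor_to_plane(theta, phi):
    # Stand-in: pretend panosphere angles map linearly to plane pixels.
    return 100.0 * theta, 100.0 * phi

def plane_to_camera2(u, v):
    # Stand-in: pretend Physical Camera 2 sees the plane shifted by (5, -3).
    return u + 5.0, v - 3.0

def anchor_to_camera2(theta, phi):
    """Compose step 1 and step 2: panosphere angles -> Camera 2 pixels."""
    return plane_to_camera2(*anchor_to_plane(theta, phi))

print(anchor_to_camera2(0.25, 0.5))  # -> (30.0, 47.0)
```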
<br />
=Week 2: Understanding Panotools=<br />
This week, I've been going through filter.h and adjust.c, the two files that contain much of the code that sets up transformations. I have added extensive comments recording my understanding of the code and uploaded the results to my branch. Between this detailed reading and a review of some older communication on the hugin-ptx list,<br />
<br />
http://groups.google.com/group/hugin-ptx/browse_thread/thread/c01fb82c91c7e502/d88a617e1860b691<br />
<br />
http://groups.google.com/group/hugin-ptx/browse_thread/thread/cd5806ebee2b09fa/46d07881d7f55adf#46d07881d7f55adf<br />
<br />
I feel I have a much better understanding of the current model in Panotools. I am working on a list of questions I plan to send to the mailing list over the weekend. Hopefully the answers will clear up the points I'm still confused about.<br />
<br />
From my talk with Daniel this week, I realized my most immediate task is to nail down which additional parameters will best describe views captured from non-overhead viewpoints. While Pablo's suggestion of an (x,y,z) coordinate for each image is possible, I have been considering alternate descriptions such as the tilt, slant, and distance of an image from a non-overhead view. I need to decide how to define these parameters precisely and which set will best model our system.<br />
<br />
<br />
=Week 3: MATLAB Modelling=<br />
This past week, I have started developing and testing my geometric model for remapping in MATLAB. Specifically, I have described the position of the plane with the parameters LookAtPoint, tilt, slant, and spin as follows:<br />
<br />
* LookAtPoint is the point where a ray exiting the camera's center of projection intersects the image plane.<br />
* tilt is a rotation about the x-axis centered at the LookAtPoint.<br />
* slant is a rotation about the y-axis centered at the LookAtPoint.<br />
* spin is an in-plane rotation centered at the LookAtPoint.<br />
<br />
Because the image plane is located some distance from the center of projection, rotations about the x- or y-axis are not naturally centered at the LookAtPoint. So, to implement slant and tilt with straightforward matrix transformations, I have expressed each as a series of rotations and translations of the image plane.<br />
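The "series of rotations and translations" above is the standard translate-rotate-translate conjugation. A small Python sketch (the project's actual code is in MATLAB; this is only an illustration, reduced to the 2D (y, z) slice affected by tilt):<br />

```python
# Sketch of rotating about an axis centered at the LookAtPoint:
# translate the LookAtPoint to the origin, rotate, translate back.
import math

def tilt_about_point(points, tilt_deg, look_at):
    """Rotate (y, z) points by tilt_deg about look_at = (y0, z0)."""
    t = math.radians(tilt_deg)
    c, s = math.cos(t), math.sin(t)
    y0, z0 = look_at
    out = []
    for y, z in points:
        dy, dz = y - y0, z - z0              # translate LookAtPoint to origin
        ry, rz = c * dy - s * dz, s * dy + c * dz  # rotate about the x-axis
        out.append((ry + y0, rz + z0))       # translate back
    return out

# The LookAtPoint itself is a fixed point of the transformation.
print(tilt_about_point([(2.0, 5.0)], 37.0, (2.0, 5.0)))  # -> [(2.0, 5.0)]
```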
<br />
The next step is to test whether this series of transformations makes images taken from what I called "Physical Camera 2" in the diagram [http://wiki.panotools.org/File:Diagram.png] appear as if they were viewed from the "Anchor Camera". I have a checkerboard surface that I plan to use as a sample planar object. MATLAB has image transformation functions that will let me easily test and view the effects of the transformations I develop.<br />
<br />
I plan to create diagrams illustrating this approach, but they are quite time-consuming to draw in Illustrator, so at the moment, my time is best spent coding and going off of hand-drawn sketches.<br />
<br />
=Week 4: Results of MATLAB Model=<br />
I've been testing a model in MATLAB for straightening slanted or tilted images. I have parameterized these geometric transformations as angular rotations about the y-axis for slant and the x-axis for tilt.<br />
<br />
As the attached document shows, simple matrix transformations straighten the slanted image plane so it lies parallel to the x-y plane. As the MATLAB code uploaded to my branch shows, this process is reversible when run on a set of points forming the corners of a square (and its center): the recovered points are exactly equal to the original points (see Figure 1, produced by running the mosaic_v11.m script).<br />
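The reversibility check can be sketched in a few lines. This is a toy Python stand-in for the mosaic_v11.m experiment, not the MATLAB code itself; "exactly equal" above refers to the MATLAB result, while here the round trip is verified to floating-point tolerance:<br />

```python
# Slant the corners of a square (and its center) about the y-axis, invert
# the slant, and confirm the points return to their original positions.
import math

def slant(points, slant_deg):
    """Rotate (x, y, z) points about the y-axis by slant_deg."""
    t = math.radians(slant_deg)
    c, s = math.cos(t), math.sin(t)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]

square = [(1, 1, 0), (-1, 1, 0), (-1, -1, 0), (1, -1, 0), (0, 0, 0)]
round_trip = slant(slant(square, 25.0), -25.0)  # slant, then undo it
ok = all(math.isclose(a, b, abs_tol=1e-12)
         for p, q in zip(square, round_trip) for a, b in zip(p, q))
print(ok)  # -> True
```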
<br />
I will continue with these tests, exploring how well this approach corrects combinations of slants and tilts. Next, I will conduct further tests on images. If these are successful, I will begin placing function prototypes for these transformations in my branch, with the intermediate goal of creating the functions needed to slant a panorama in Panotools via a command-line parameter.<br />
<br />
[[File:Slant_example.jpg]]</div>
<hr />
<div>=SoC2009 Dev Ghosh=<br />
<br />
==Abstract==<br />
<br />
Hugin creates wide-angle photo panoramas taken from a single camera position, one of the most popular applications of stitching software. However, when viewing a flat object such as a painting, photographs taken from different, casually chosen viewpoints must be combined to create a large high-resolution image mosaic. But Hugin can’t assemble photo mosaics of large flat objects photographed from many different camera locations. While tweaking Hugin’s settings can provide an approximate result, the underlying Panotools imaging model does not include a projective homography integrated with lens distortion correction. The optimize-able imaging parameters are unable to fully rectify and align the tiles of a mosaic image. This results in mosaics with warped edges and significant distortions. The process requires awkward user interactions as shown in [1], a tutorial by Joachim Fenkes titled “Creating linear panoramas with Hugin.” Fenkes uses horizontal image features such as utility boxes aligned at the same height along the graffiti covered wall to straighten the snaking panorama. I propose to add a new “mosaic mode” to Panotools. This will introduce a new image model based on multiple centers of projection to generate high-resolution, distortion-free images of flat objects captured with either handheld or tripod-mounted cameras.<br />
<br />
[1] http://www.dojoe.net/tutorials/linear-pano/<br />
<br />
<br />
==Details==<br />
<br />
Problem<br />
<br />
Users of Hugin have developed methods for stitching 2D scanned objects and stitching linear panoramas. For best results, these methods rely on matching line constraints (control points of type horizontal or vertical line). But on images lacking clear horizontal or vertical features, when only manually placed or SIFT-generated control points are available, such approaches result in poorly aligned mosaics.<br />
<br />
Solution<br />
<br />
The Panotools imaging model optimizes eight parameters to correct lens distortion and geometric alignment. However, this model is unable to describe warps between overlapping images of a planar object taken from different viewpoints. While leaving the existing lens distortion parameters in place, I propose to develop an extended geometric model describing the relationship between a orthographic view of a planar object with a perspective view of a subsection of the object. This model will be optimized using the existing Levenberg-Marquardt optimizer.<br />
The optimized parameters will allow us to project each view of a subsection of the object as if it were viewed with a large, high-resolution, orthographic camera or with a perspective camera from a distance sufficient to view the whole. This approach will also allow views of the object taken from low angles to be projected and mapped to the orthographic view.<br />
<br />
Performance Measures<br />
<br />
The new geometric model will be tested on synthetic image data and overlapping views of a large painting in the collections at the Art Institute of Chicago. The results will be compared to those obtained using Panotools’s current model by average and worst control point distance and by image differencing in overlap regions.<br />
<br />
==Project Timeline==<br />
<br />
I plan to spend at least 40 hours a week dedicated to this project. I have already begun to explore and make modifications to the source code. This head start will allow me to work through the months of April and May and continue through the end of Summer of Code. These are the steps I plan to take:<br />
<br />
• Study current model for projection of images in libpano13 (prior to May 23)<br />
<br />
• Research existing models for viewing images photographed from different viewpoints including “Gold Standard” Direct Linear Transform algorithm and Szeliski’s Image Alignment and Stitching: A Tutorial (prior to May 23) <br />
<br />
• Develop geometric model for viewing of mosaic (two weeks)<br />
<br />
• Implement new model in Panotools (five weeks)<br />
<br />
• Test model on synthetic and real image data sets (two weeks)<br />
<br />
• Evaluate performance by comparing alignment attempts with existing model (in average control point error) (two weeks)<br />
<br />
• Wrap up and complete documentation (one week)<br />
<br />
==Deliverables==<br />
<br />
• Documentation describing geometric model<br />
<br />
• A library implementing the model<br />
<br />
• Results of tests of library<br />
<br />
• Fully-commented source code<br />
<br />
==Biography==<br />
<br />
I have been experimenting with Hugin since Summer 2008 and started to build my own version and modify the source code in December 2008. I built Hugin both on Windows and on Linux under Ubuntu. My interest is in using Hugin to assemble large, high-resolution mosaics of paintings photographed from different viewpoints with 50% or more overlap. <br />
<br />
I am a PhD candidate in Electrical Engineering and Computer Science at Northwestern University, Evanston, Illinois. I have coding experience in Python, C/C++, and MATLAB. My background is in video, signal, and image processing and my previous projects have included making improvements to the Joint Scalable Video Model, a scalable version of the H.264 video codec written in C++. I enjoy taking pictures and solving problems and participating in Google’s Summer of Code will allow me to work solely on this project over the summer without other distractions.<br />
<br />
==References==<br />
<br />
• http://hugin.sourceforge.net/tutorials/scans/en.shtml<br />
<br />
• http://www.dojoe.net/tutorials/linear-pano/<br />
<br />
• Zhang, Z. and He, L.-W. (2007). Whiteboard scanning and image enhancement. Digital Signal Processing, 17(2), 414–432.<br />
<br />
• http://www.ics.forth.gr/~lourakis/homest/<br />
<br />
• http://grail.cs.washington.edu/projects/multipano/<br />
<br />
• Richard Hartley and Andrew Zisserman (2003). Multiple View Geometry in computer vision. Cambridge University Press.<br />
<br />
• Richard Szeliski (2006) Image alignment and stitching: a tutorial. Now Publishers Inc.<br />
<br />
* End of Proposal<br />
<br />
<br />
=Week 1: Initial Approach=<br />
Below, we see a top view of the planar object we wish to image. The circle represents the panosphere and the triangles represent the viewing frusta of two cameras. We wish to project points on the planar object, p(x,y), as viewed from Physical Camera 2 to a virtual (output) Anchor Camera, coincident with Physical Camera 1. <br />
<br />
[[File:Diagram.png]]<br />
<br />
To do this, we propose to add a position for each image (x,y,z). All images with non-zero (x,y,z) coordinates are projected to the plane. The location of the plane will be described by the yaw and pitch of a normal to the planar surface. (Note that Physical Camera 1/Anchor Camera is not necessarily perpendicular to the planar surface but it is located at (0,0,0) and is assumed to be 1 distance unit from the plane.)<br />
<br />
Creating the output image takes two steps:<br />
* We must map from the Anchor Camera onto the plane. This gives us a relation between angles on the panosphere and pixels on the plane .<br />
<br />
* We must map from the plane to Physical Camera 2. This gives us a relation between pixels on the plane and the position of Physical Camera 2.<br />
<br />
Next, the additional parameters added will be optimized. It is unknown whether this optimization will be successful or stable at this point.<br />
<br />
=Week 2: Understanding Panotools=<br />
This week, I've been going through filter.h and adjust.c, the two files that contain much of the code to setup transformations. I have added extensive comments to note my understanding of the code and uploaded the results to my branch. Between this detailed reading and review of some older communication on the hugin-ptx list,<br />
<br />
http://groups.google.com/group/hugin-ptx/browse_thread/thread/c01fb82c91c7e502/d88a617e1860b691<br />
<br />
http://groups.google.com/group/hugin-ptx/browse_thread/thread/cd5806ebee2b09fa/46d07881d7f55adf#46d07881d7f55adf<br />
<br />
I feel I have a much better understanding of the current model in panotools. I am working on a list of questions I plan to send to the mailing list over the weekend. Hopefully the answers will clear up the points that I'm still confused on.<br />
<br />
From my talk with Daniel this week, I realized my most immediate task is to nail down which additional parameters will describe views captured from non-overhead viewpoints best. While Pablo's suggestion of having an x,y,z coordinate for each image is possible, I have been considering alternate descriptions such as the tilt and slant and distance of an image from a non-overhead view. I need to decide how to precisely define these parameters and which set will best model our system.<br />
<br />
<br />
=Week 3: MATLAB Modelling=<br />
This past week, I have started developing and testing my geometric model for remapping in MATLAB. Specifically, I have described the position of the plane with the parameters LookAtPoint, tilt, slant, and spin as follows:<br />
<br />
* LookAtPoint is the point where a ray exiting the camera's center of projection intersects the image plane.<br />
* tilt is a rotation about the x-axis centered at the LookAtPoint.<br />
* slant is a rotation about the y-axis centered at the LookAtPoint.<br />
* spin is a rotation about the LookAtPoint.<br />
<br />
Because the image plane is located some distance from the center of projection, it's not easy to rotate about the x- or y-axis while centered at the LookAtPoint. So to implement the slant and tilt with straightforward matrix transformations, I have described them as a series of rotations and translations of the image plane.<br />
<br />
The next step is to test if this series of transformations effectively causes images taken from what I called "Physical Camera 2" in the diagram[http://wiki.panotools.org/File:Diagram.png] to appear as if they are viewed from the "Anchor Camera". I have a checkerboard surface that I plan to use as a sample planar object. MATLAB has image transformation functions that will allow me to easily test and view the effects of the image transformations I develop.<br />
<br />
I plan to create diagrams illustrating this approach, but they are quite time-consuming to draw in Illustrator, so at the moment, my time is best spent coding and going off of hand-drawn sketches.<br />
<br />
=Week 4: Results of MATLAB Model=<br />
I've been testing a model in MATLAB for straightening slanted or tilted images. I have parameterized these geometric transformations as angular rotations about the y-axis for slant and x-axis for tilt.<br />
<br />
As the attached document shows, simple matrix transformations straighten the slanted image plane so it lies parallel to the x-y plane. As the MATLAB code uploaded to my branch shows, this process is reversible when run on a set of points forming the corners of a square (and its center). The recovered points are exactly equal to the original points. (See Figure 1, if you run the mosaic_v11.m script)<br />
<br />
I will continue with these tests, exploring how capable this approach is when correcting combinations of slants and tilts. Next I will conduct further tests on images. If successful, I will begin to place function prototypes for these transformations in my branch with the intermediate goal of creating the functions necessary to slant a panorama in Panotools by adjusting a command line parameter.<br />
<br />
[[File:Slant example.jpg]]</div>Dghttp://wiki.panotools.org/User:DgUser:Dg2009-06-23T07:16:48Z<p>Dg: </p>
<hr />
<div>=SoC2009 Dev Ghosh=<br />
<br />
==Abstract==<br />
<br />
Hugin creates wide-angle photo panoramas taken from a single camera position, one of the most popular applications of stitching software. However, when viewing a flat object such as a painting, photographs taken from different, casually chosen viewpoints must be combined to create a large high-resolution image mosaic. But Hugin can’t assemble photo mosaics of large flat objects photographed from many different camera locations. While tweaking Hugin’s settings can provide an approximate result, the underlying Panotools imaging model does not include a projective homography integrated with lens distortion correction. The optimize-able imaging parameters are unable to fully rectify and align the tiles of a mosaic image. This results in mosaics with warped edges and significant distortions. The process requires awkward user interactions as shown in [1], a tutorial by Joachim Fenkes titled “Creating linear panoramas with Hugin.” Fenkes uses horizontal image features such as utility boxes aligned at the same height along the graffiti covered wall to straighten the snaking panorama. I propose to add a new “mosaic mode” to Panotools. This will introduce a new image model based on multiple centers of projection to generate high-resolution, distortion-free images of flat objects captured with either handheld or tripod-mounted cameras.<br />
<br />
[1] http://www.dojoe.net/tutorials/linear-pano/<br />
<br />
<br />
==Details==<br />
<br />
Problem<br />
<br />
Users of Hugin have developed methods for stitching 2D scanned objects and stitching linear panoramas. For best results, these methods rely on matching line constraints (control points of type horizontal or vertical line). But on images lacking clear horizontal or vertical features, when only manually placed or SIFT-generated control points are available, such approaches result in poorly aligned mosaics.<br />
<br />
Solution<br />
<br />
The Panotools imaging model optimizes eight parameters to correct lens distortion and geometric alignment. However, this model is unable to describe warps between overlapping images of a planar object taken from different viewpoints. While leaving the existing lens distortion parameters in place, I propose to develop an extended geometric model describing the relationship between a orthographic view of a planar object with a perspective view of a subsection of the object. This model will be optimized using the existing Levenberg-Marquardt optimizer.<br />
The optimized parameters will allow us to project each view of a subsection of the object as if it were viewed with a large, high-resolution, orthographic camera or with a perspective camera from a distance sufficient to view the whole. This approach will also allow views of the object taken from low angles to be projected and mapped to the orthographic view.<br />
<br />
Performance Measures<br />
<br />
The new geometric model will be tested on synthetic image data and overlapping views of a large painting in the collections at the Art Institute of Chicago. The results will be compared to those obtained using Panotools’s current model by average and worst control point distance and by image differencing in overlap regions.<br />
<br />
==Project Timeline==<br />
<br />
I plan to spend at least 40 hours a week dedicated to this project. I have already begun to explore and make modifications to the source code. This head start will allow me to work through the months of April and May and continue through the end of Summer of Code. These are the steps I plan to take:<br />
<br />
• Study current model for projection of images in libpano13 (prior to May 23)<br />
<br />
• Research existing models for viewing images photographed from different viewpoints including “Gold Standard” Direct Linear Transform algorithm and Szeliski’s Image Alignment and Stitching: A Tutorial (prior to May 23) <br />
<br />
• Develop geometric model for viewing of mosaic (two weeks)<br />
<br />
• Implement new model in Panotools (five weeks)<br />
<br />
• Test model on synthetic and real image data sets (two weeks)<br />
<br />
• Evaluate performance by comparing alignment attempts with existing model (in average control point error) (two weeks)<br />
<br />
• Wrap up and complete documentation (one week)<br />
<br />
==Deliverables==<br />
<br />
• Documentation describing geometric model<br />
<br />
• A library implementing the model<br />
<br />
• Results of tests of library<br />
<br />
• Fully-commented source code<br />
<br />
==Biography==<br />
<br />
I have been experimenting with Hugin since Summer 2008 and started to build my own version and modify the source code in December 2008. I built Hugin both on Windows and on Linux under Ubuntu. My interest is in using Hugin to assemble large, high-resolution mosaics of paintings photographed from different viewpoints with 50% or more overlap. <br />
<br />
I am a PhD candidate in Electrical Engineering and Computer Science at Northwestern University, Evanston, Illinois. I have coding experience in Python, C/C++, and MATLAB. My background is in video, signal, and image processing and my previous projects have included making improvements to the Joint Scalable Video Model, a scalable version of the H.264 video codec written in C++. I enjoy taking pictures and solving problems and participating in Google’s Summer of Code will allow me to work solely on this project over the summer without other distractions.<br />
<br />
==References==<br />
<br />
• http://hugin.sourceforge.net/tutorials/scans/en.shtml<br />
<br />
• http://www.dojoe.net/tutorials/linear-pano/<br />
<br />
• Zhang, Z. and He, L.-W. (2007). Whiteboard scanning and image enhancement. Digital Signal Processing, 17(2), 414–432.<br />
<br />
• http://www.ics.forth.gr/~lourakis/homest/<br />
<br />
• http://grail.cs.washington.edu/projects/multipano/<br />
<br />
• Richard Hartley and Andrew Zisserman (2003). Multiple View Geometry in computer vision. Cambridge University Press.<br />
<br />
• Richard Szeliski (2006) Image alignment and stitching: a tutorial. Now Publishers Inc.<br />
<br />
* End of Proposal<br />
<br />
<br />
=Week 1: Initial Approach=<br />
Below, we see a top view of the planar object we wish to image. The circle represents the panosphere and the triangles represent the viewing frusta of two cameras. We wish to project points on the planar object, p(x,y), as viewed from Physical Camera 2 to a virtual (output) Anchor Camera, coincident with Physical Camera 1. <br />
<br />
[[File:Diagram.png]]<br />
<br />
To do this, we propose to add a position for each image (x,y,z). All images with non-zero (x,y,z) coordinates are projected to the plane. The location of the plane will be described by the yaw and pitch of a normal to the planar surface. (Note that Physical Camera 1/Anchor Camera is not necessarily perpendicular to the planar surface but it is located at (0,0,0) and is assumed to be 1 distance unit from the plane.)<br />
<br />
Creating the output image takes two steps:<br />
* We must map from the Anchor Camera onto the plane. This gives us a relation between angles on the panosphere and pixels on the plane .<br />
<br />
* We must map from the plane to Physical Camera 2. This gives us a relation between pixels on the plane and the position of Physical Camera 2.<br />
<br />
Next, the additional parameters added will be optimized. It is unknown whether this optimization will be successful or stable at this point.<br />
<br />
=Week 2: Understanding Panotools=<br />
This week, I've been going through filter.h and adjust.c, the two files that contain much of the code to setup transformations. I have added extensive comments to note my understanding of the code and uploaded the results to my branch. Between this detailed reading and review of some older communication on the hugin-ptx list,<br />
<br />
http://groups.google.com/group/hugin-ptx/browse_thread/thread/c01fb82c91c7e502/d88a617e1860b691<br />
<br />
http://groups.google.com/group/hugin-ptx/browse_thread/thread/cd5806ebee2b09fa/46d07881d7f55adf#46d07881d7f55adf<br />
<br />
I feel I have a much better understanding of the current model in Panotools. I am working on a list of questions I plan to send to the mailing list over the weekend. Hopefully the answers will clear up the points I'm still confused about.<br />
<br />
From my talk with Daniel this week, I realized my most immediate task is to nail down which additional parameters will best describe views captured from non-overhead viewpoints. While Pablo's suggestion of an (x,y,z) coordinate for each image is possible, I have been considering alternate descriptions such as the tilt, slant, and distance of an image from a non-overhead view. I need to decide how to precisely define these parameters and which set will best model our system.<br />
<br />
<br />
=Week 3: MATLAB Modelling=<br />
This past week, I have started developing and testing my geometric model for remapping in MATLAB. Specifically, I have described the position of the plane with the parameters LookAtPoint, tilt, slant, and spin as follows:<br />
<br />
* LookAtPoint is the point where a ray exiting the camera's center of projection intersects the image plane.<br />
* tilt is a rotation about the x-axis centered at the LookAtPoint.<br />
* slant is a rotation about the y-axis centered at the LookAtPoint.<br />
* spin is an in-plane rotation of the image about the LookAtPoint.<br />
<br />
Because the image plane is located some distance from the center of projection, rotating about an x- or y-axis centered at the LookAtPoint is not a single elementary transformation. So, to implement the slant and tilt with straightforward matrix transformations, I have described each as a series of rotations and translations of the image plane.<br />
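A homogeneous-coordinates sketch of this translate–rotate–translate composition (Python/NumPy here; a MATLAB version would be analogous, and the helper names are mine). By construction, the LookAtPoint is a fixed point of the result:<br />

```python
import numpy as np

def translation(t):
    """4x4 homogeneous translation by vector t."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

def rot_x(theta):
    """4x4 homogeneous rotation by theta about the x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[1:3, 1:3] = [[c, -s], [s, c]]
    return R

def tilt_about(look_at, theta):
    """Tilt: rotate about the x-axis through look_at, built as
    translate-to-origin, rotate, translate back."""
    p = np.asarray(look_at, dtype=float)
    return translation(p) @ rot_x(theta) @ translation(-p)
```

Slant is the same construction with a y-axis rotation, and spin with a z-axis one.<br />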
<br />
The next step is to test whether this series of transformations effectively causes images taken from what I called "Physical Camera 2" in the [http://wiki.panotools.org/File:Diagram.png diagram] to appear as if viewed from the "Anchor Camera". I have a checkerboard surface that I plan to use as a sample planar object. MATLAB has image transformation functions that will allow me to easily test and view the effects of the transformations I develop.<br />
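Independent of MATLAB's image-warping built-ins, one cheap sanity check is a round trip on the checkerboard's corner coordinates: warp them with a transform and its inverse and confirm the originals come back (Python/NumPy sketch; the homography H is an arbitrary invertible stand-in for the slant/tilt warp):<br />

```python
import numpy as np

# Corner coordinates of an 8x8 checkerboard (unit squares), homogeneous.
corners = np.array([[x, y, 1.0] for x in range(9) for y in range(9)]).T

# An arbitrary invertible homography standing in for the slant/tilt warp.
H = np.array([[1.0,   0.1,   2.0],
              [0.05,  0.9,  -1.0],
              [0.001, 0.002, 1.0]])

warped = H @ corners
warped /= warped[2]                      # dehomogenize
restored = np.linalg.inv(H) @ warped
restored /= restored[2]

print(np.allclose(restored, corners))    # round trip recovers the corners
```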
<br />
I plan to create diagrams illustrating this approach, but they are quite time-consuming to draw in Illustrator, so at the moment my time is best spent coding and working from hand-drawn sketches.<br />
<br />
=Week 4: Results of MATLAB Model=<br />
[[File:Slant example.jpg]]</div>
<hr />
<div>=SoC2009 Dev Ghosh=<br />
<br />
==Abstract==<br />
<br />
Hugin creates wide-angle photo panoramas taken from a single camera position, one of the most popular applications of stitching software. However, when viewing a flat object such as a painting, photographs taken from different, casually chosen viewpoints must be combined to create a large high-resolution image mosaic. But Hugin can’t assemble photo mosaics of large flat objects photographed from many different camera locations. While tweaking Hugin’s settings can provide an approximate result, the underlying Panotools imaging model does not include a projective homography integrated with lens distortion correction. The optimize-able imaging parameters are unable to fully rectify and align the tiles of a mosaic image. This results in mosaics with warped edges and significant distortions. The process requires awkward user interactions as shown in [1], a tutorial by Joachim Fenkes titled “Creating linear panoramas with Hugin.” Fenkes uses horizontal image features such as utility boxes aligned at the same height along the graffiti covered wall to straighten the snaking panorama. I propose to add a new “mosaic mode” to Panotools. This will introduce a new image model based on multiple centers of projection to generate high-resolution, distortion-free images of flat objects captured with either handheld or tripod-mounted cameras.<br />
<br />
[1] http://www.dojoe.net/tutorials/linear-pano/<br />
<br />
<br />
==Details==<br />
<br />
Problem<br />
<br />
Users of Hugin have developed methods for stitching 2D scanned objects and stitching linear panoramas. For best results, these methods rely on matching line constraints (control points of type horizontal or vertical line). But on images lacking clear horizontal or vertical features, when only manually placed or SIFT-generated control points are available, such approaches result in poorly aligned mosaics.<br />
<br />
Solution<br />
<br />
The Panotools imaging model optimizes eight parameters to correct lens distortion and geometric alignment. However, this model is unable to describe warps between overlapping images of a planar object taken from different viewpoints. While leaving the existing lens distortion parameters in place, I propose to develop an extended geometric model describing the relationship between a orthographic view of a planar object with a perspective view of a subsection of the object. This model will be optimized using the existing Levenberg-Marquardt optimizer.<br />
The optimized parameters will allow us to project each view of a subsection of the object as if it were viewed with a large, high-resolution, orthographic camera or with a perspective camera from a distance sufficient to view the whole. This approach will also allow views of the object taken from low angles to be projected and mapped to the orthographic view.<br />
<br />
Performance Measures<br />
<br />
The new geometric model will be tested on synthetic image data and overlapping views of a large painting in the collections at the Art Institute of Chicago. The results will be compared to those obtained using Panotools’s current model by average and worst control point distance and by image differencing in overlap regions.<br />
<br />
==Project Timeline==<br />
<br />
I plan to spend at least 40 hours a week dedicated to this project. I have already begun to explore and make modifications to the source code. This head start will allow me to work through the months of April and May and continue through the end of Summer of Code. These are the steps I plan to take:<br />
<br />
• Study current model for projection of images in libpano13 (prior to May 23)<br />
<br />
• Research existing models for relating images photographed from different viewpoints, including the “Gold Standard” Direct Linear Transform algorithm and Szeliski’s Image Alignment and Stitching: A Tutorial (prior to May 23)<br />
<br />
• Develop geometric model for viewing of mosaic (two weeks)<br />
<br />
• Implement new model in Panotools (five weeks)<br />
<br />
• Test model on synthetic and real image data sets (two weeks)<br />
<br />
• Evaluate performance by comparing alignment results against the existing model, measured by average control point error (two weeks)<br />
<br />
• Wrap up and complete documentation (one week)<br />
<br />
==Deliverables==<br />
<br />
• Documentation describing geometric model<br />
<br />
• A library implementing the model<br />
<br />
• Results of tests of library<br />
<br />
• Fully-commented source code<br />
<br />
==Biography==<br />
<br />
I have been experimenting with Hugin since Summer 2008 and started to build my own version and modify the source code in December 2008. I built Hugin both on Windows and on Linux under Ubuntu. My interest is in using Hugin to assemble large, high-resolution mosaics of paintings photographed from different viewpoints with 50% or more overlap. <br />
<br />
I am a PhD candidate in Electrical Engineering and Computer Science at Northwestern University, Evanston, Illinois. I have coding experience in Python, C/C++, and MATLAB. My background is in video, signal, and image processing, and my previous projects have included making improvements to the Joint Scalable Video Model, a scalable version of the H.264 video codec written in C++. I enjoy taking pictures and solving problems, and participating in Google’s Summer of Code will allow me to work solely on this project over the summer without other distractions.<br />
<br />
==References==<br />
<br />
• http://hugin.sourceforge.net/tutorials/scans/en.shtml<br />
<br />
• http://www.dojoe.net/tutorials/linear-pano/<br />
<br />
• Zhang, Z. and He, L.-W. (2007). Whiteboard scanning and image enhancement. Digital Signal Processing, 17(2), 414–432.<br />
<br />
• http://www.ics.forth.gr/~lourakis/homest/<br />
<br />
• http://grail.cs.washington.edu/projects/multipano/<br />
<br />
• Richard Hartley and Andrew Zisserman (2003). Multiple View Geometry in Computer Vision. Cambridge University Press.<br />
<br />
• Richard Szeliski (2006). Image Alignment and Stitching: A Tutorial. Now Publishers Inc.<br />
<br />
<br />
==Initial Approach==<br />
Below, we see a top view of the planar object we wish to image. The circle represents the panosphere and the triangles represent the viewing frusta of two cameras. We wish to project points on the planar object, p(x,y), as viewed from Physical Camera 2, to a virtual (output) Anchor Camera coincident with Physical Camera 1.<br />
<br />
[[File:Diagram.png]]<br />
<br />
To do this, we propose to add a position (x,y,z) for each image. All images with non-zero (x,y,z) coordinates are projected to the plane. The orientation of the plane will be described by the yaw and pitch of a normal to the planar surface. (Note that Physical Camera 1/Anchor Camera is not necessarily perpendicular to the planar surface, but it is located at (0,0,0) and is assumed to be 1 distance unit from the plane.)<br />
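The yaw/pitch parameterization of the plane normal can be sketched as follows. The angle convention here is an assumption chosen for illustration: yaw rotates about the vertical axis, pitch tilts up and down, and (yaw, pitch) = (0, 0) gives a normal pointing along -z, back toward the anchor camera at the origin.

```python
# Sketch: recovering the plane's unit normal from the yaw and pitch angles
# used to parameterize the planar surface. The axis convention is an
# assumption for illustration, not the libpano13 convention.
import math

def plane_normal(yaw_deg, pitch_deg):
    """Unit normal of the plane for the assumed yaw/pitch convention."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    nx = math.sin(yaw) * math.cos(pitch)
    ny = math.sin(pitch)
    nz = -math.cos(yaw) * math.cos(pitch)
    return nx, ny, nz

print(plane_normal(0.0, 0.0))  # normal facing the anchor camera
```

With the anchor camera fixed at (0,0,0) one unit from the plane, this normal and distance fully determine the plane, so only two angles per project (plus (x,y,z) per image) are added to the optimization.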
<br />
Creating the output image takes two steps:<br />
* We must map from the Anchor Camera onto the plane. This gives us a correspondence between pixels on the plane and angles on the panosphere.<br />
<br />
* We must map from the plane to Physical Camera 2.</div>Dg
<div>=SoC2009 Dev Ghosh=<br />
<br />
==Abstract==<br />
<br />
Hugin creates wide-angle photo panoramas taken from a single camera position, one of the most popular applications of stitching software. However, when viewing a flat object such as a painting, photographs taken from different, casually chosen viewpoints must be combined to create a large high-resolution image mosaic. But Hugin can’t assemble photo mosaics of large flat objects photographed from many different camera locations. While tweaking Hugin’s settings can provide an approximate result, the underlying Panotools imaging model does not include a projective homography integrated with lens distortion correction. The optimize-able imaging parameters are unable to fully rectify and align the tiles of a mosaic image. This results in mosaics with warped edges and significant distortions. The process requires awkward user interactions as shown in [1], a tutorial by Joachim Fenkes titled “Creating linear panoramas with Hugin.” Fenkes uses horizontal image features such as utility boxes aligned at the same height along the graffiti covered wall to straighten the snaking panorama. I propose to add a new “mosaic mode” to Panotools. This will introduce a new image model based on multiple centers of projection to generate high-resolution, distortion-free images of flat objects captured with either handheld or tripod-mounted cameras.<br />
<br />
[1] http://www.dojoe.net/tutorials/linear-pano/<br />
<br />
<br />
==Details==<br />
<br />
Problem<br />
<br />
Users of Hugin have developed methods for stitching 2D scanned objects and stitching linear panoramas. For best results, these methods rely on matching line constraints (control points of type horizontal or vertical line). But on images lacking clear horizontal or vertical features, when only manually placed or SIFT-generated control points are available, such approaches result in poorly aligned mosaics.<br />
<br />
Solution<br />
<br />
The Panotools imaging model optimizes eight parameters to correct lens distortion and geometric alignment. However, this model cannot describe the warps between overlapping images of a planar object taken from different viewpoints. While leaving the existing lens distortion parameters in place, I propose to develop an extended geometric model describing the relationship between an orthographic view of a planar object and a perspective view of a subsection of the object. This model will be optimized using the existing Levenberg-Marquardt optimizer.<br />
The optimized parameters will allow us to project each view of a subsection of the object as if it were viewed with a large, high-resolution orthographic camera, or with a perspective camera at a distance sufficient to view the whole object. This approach will also allow views of the object taken from low angles to be projected and mapped to the orthographic view.<br />
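As a toy illustration of how such a model could be fitted, the sketch below runs a minimal Levenberg-Marquardt loop with a numeric Jacobian to recover the eight parameters of a homography from synthetic control-point pairs. It demonstrates only the optimization idea; the actual libpano13 optimizer, its parameterization, and its data structures differ, and all data here are invented.

```python
# Toy Levenberg-Marquardt loop (pure Python, numeric Jacobian) fitting
# the eight parameters h11..h32 of a homography (h33 fixed at 1) to
# control-point pairs. Illustration only; not the libpano13 optimizer.

def warp(h, p):
    """Apply the 8-parameter homography to a 2D point."""
    x, y = p
    w = h[6]*x + h[7]*y + 1.0
    return ((h[0]*x + h[1]*y + h[2]) / w, (h[3]*x + h[4]*y + h[5]) / w)

def residuals(h, src, dst):
    """Stacked (predicted - observed) control point coordinates."""
    r = []
    for p, q in zip(src, dst):
        u, v = warp(h, p)
        r += [u - q[0], v - q[1]]
    return r

def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [M[r][k] - f*M[c][k] for k in range(n + 1)]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k]*x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def lm_fit(src, dst, iters=50, lam=1e-3):
    h = [1, 0, 0, 0, 1, 0, 0, 0]          # start from the identity warp
    eps = 1e-6
    for _ in range(iters):
        r = residuals(h, src, dst)
        cost = sum(v*v for v in r)
        # forward-difference Jacobian: J[j][i] = d r_i / d h_j
        J = []
        for j in range(8):
            hj = h[:]; hj[j] += eps
            rj = residuals(hj, src, dst)
            J.append([(a - b) / eps for a, b in zip(rj, r)])
        # damped normal equations: (J^T J + lam*I) d = -J^T r
        A = [[sum(J[i][k]*J[j][k] for k in range(len(r)))
              + (lam if i == j else 0.0) for j in range(8)] for i in range(8)]
        g = [-sum(J[i][k]*r[k] for k in range(len(r))) for i in range(8)]
        h_new = [a + b for a, b in zip(h, solve(A, g))]
        if sum(v*v for v in residuals(h_new, src, dst)) < cost:
            h, lam = h_new, lam * 0.5     # accept step, trust the model more
        else:
            lam *= 10.0                   # reject step, damp more
    return h

# synthetic control points generated by a known homography
true_h = [1.1, 0.02, 3.0, -0.01, 0.95, -2.0, 0.0004, 0.0002]
src = [(0, 0), (100, 0), (0, 100), (100, 100), (50, 30), (30, 70)]
dst = [warp(true_h, p) for p in src]
est = lm_fit(src, dst)    # recovers true_h to high accuracy
```

The accept/reject step with the damping factor lam is what distinguishes Levenberg-Marquardt from plain Gauss-Newton: near the solution it behaves like Gauss-Newton, while far from it the damping keeps steps conservative.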
<br />
Performance Measures<br />
<br />
The new geometric model will be tested on synthetic image data and on overlapping views of a large painting in the collections of the Art Institute of Chicago. The results will be compared to those obtained with the current Panotools model, using average and worst-case control point distance and image differencing in overlap regions.<br />
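The control-point metrics proposed here are straightforward to compute. A minimal sketch (with made-up point pairs) might look like:

```python
import math

# Sketch of the proposed alignment metrics: average and worst-case
# control point distance after remapping. Each pair holds the remapped
# positions of the same control point in two overlapping images; the
# values are invented for illustration.

def cp_error_stats(pairs):
    dists = [math.hypot(ax - bx, ay - by) for (ax, ay), (bx, by) in pairs]
    return sum(dists) / len(dists), max(dists)

pairs = [((10.0, 20.0), (10.5, 20.0)),
         ((300.0, 40.0), (300.0, 43.0)),
         ((150.0, 220.0), (151.0, 219.0))]
avg, worst = cp_error_stats(pairs)   # avg ~ 1.64 px, worst = 3.0 px
```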
<br />
==Project Timeline==<br />
<br />
I plan to dedicate at least 40 hours a week to this project. I have already begun to explore and modify the source code. This head start will allow me to work through April and May and continue through the end of Summer of Code. These are the steps I plan to take:<br />
<br />
• Study current model for projection of images in libpano13 (prior to May 23)<br />
<br />
• Research existing models for viewing images photographed from different viewpoints, including the “Gold Standard” Direct Linear Transform algorithm and Szeliski’s Image Alignment and Stitching: A Tutorial (prior to May 23)<br />
<br />
• Develop geometric model for viewing of mosaic (two weeks)<br />
<br />
• Implement new model in Panotools (five weeks)<br />
<br />
• Test model on synthetic and real image data sets (two weeks)<br />
<br />
• Evaluate performance by comparing alignment attempts with existing model (in average control point error) (two weeks)<br />
<br />
• Wrap up and complete documentation (one week)<br />
<br />
==Deliverables==<br />
<br />
• Documentation describing geometric model<br />
<br />
• A library implementing the model<br />
<br />
• Results of tests of library<br />
<br />
• Fully-commented source code<br />
<br />
==Biography==<br />
<br />
I have been experimenting with Hugin since Summer 2008 and started building my own version and modifying the source code in December 2008. I have built Hugin on both Windows and Linux (Ubuntu). My interest is in using Hugin to assemble large, high-resolution mosaics of paintings photographed from different viewpoints with 50% or more overlap. <br />
<br />
I am a PhD candidate in Electrical Engineering and Computer Science at Northwestern University, Evanston, Illinois. I have coding experience in Python, C/C++, and MATLAB. My background is in video, signal, and image processing, and my previous projects have included making improvements to the Joint Scalable Video Model, a scalable version of the H.264 video codec written in C++. I enjoy taking pictures and solving problems, and participating in Google’s Summer of Code will allow me to work solely on this project over the summer without other distractions.<br />
<br />
==References==<br />
<br />
• http://hugin.sourceforge.net/tutorials/scans/en.shtml<br />
<br />
• http://www.dojoe.net/tutorials/linear-pano/<br />
<br />
• Zhang, Z. and He, L.-W. (2007). Whiteboard scanning and image enhancement. Digital Signal Processing, 17(2), 414–432.<br />
<br />
• http://www.ics.forth.gr/~lourakis/homest/<br />
<br />
• http://grail.cs.washington.edu/projects/multipano/<br />
<br />
• Richard Hartley and Andrew Zisserman (2003). Multiple View Geometry in Computer Vision. Cambridge University Press.<br />
<br />
• Richard Szeliski (2006). Image Alignment and Stitching: A Tutorial. Now Publishers Inc.<br />
<br />
<br />
==Initial Approach==<br />
The top view below shows the planar object we wish to image. The circle represents the panosphere.<br />
<br />
[[File:Diagram.png]]</div>Dghttp://wiki.panotools.org/File:Diagram.pngFile:Diagram.png2009-05-30T22:22:02Z<p>Dg: uploaded a new version of "File:Diagram.png"</p>
<hr />
<div></div>Dghttp://wiki.panotools.org/File:Diagram.pngFile:Diagram.png2009-05-30T20:21:35Z<p>Dg: </p>
<hr />
<div></div>Dghttp://wiki.panotools.org/User:DgUser:Dg2009-04-04T08:40:13Z<p>Dg: Created page with ''''Abstract''' Hugin creates wide-angle photo panoramas taken from a single camera position, one of the most popular applications of stitching software. However, when viewing a...'</p>
<hr />
<div></div>Dghttp://wiki.panotools.org/SoC_2009_student_proposalsSoC 2009 student proposals2009-04-04T08:26:22Z<p>Dg: </p>
<hr />
<div>Student Proposals: Student info and short project synopsis, with link to a new Wiki page where the project is expanded in full detail. See template below:<br />
<br />
== MO Faruque Sarker: Handling very large images using VIPS ==<br />
<br />
* A second-year PhD student at the University of Wales, Newport, in the Robotic Intelligence Lab (my homepage: http://ril.newport.ac.uk/sarker/index.php). <br />
* Coding Platform: Ubuntu 8.10 x64, Pentium 4 3GHz, 32GB RAM<br />
* I'm interested in working on processing large images in Hugin. We have a 16-megapixel Prosilica GE4900C CCD camera for tracking about 30 mobile robots. It produces roughly a 50 MB image file per frame, which consumes a huge amount of memory. I'm running my tracking algorithm on Ubuntu 8.10 x64 and have got some nice results tracking markers (http://ril.newport.ac.uk/sarker/index.php?pid=21).<br />
<br />
[[SoC2009_MOFaruque_Sarker | Handling very large images using VIPS]]<br />
<br />
== Lukáš Jirkovský ==<br />
<br />
* First-year undergraduate at the University of West Bohemia<br />
* Platform used for coding: Arch Linux, Pentium 4 Northwood 2GHz, 1.5 GB RAM<br />
* Coding skills: C++, PHP, Java, Matlab<br />
* [[SoC2009_Lukas_Jirkovsky | Ghost removal for enfuse]]<br />
* http://socghop.appspot.com/student_proposal/show/google/gsoc2009/stativ/t123797116006 Project proposal in Google's webapp<br />
<br />
== [[User:Leonox|León Moctezuma]]: QuickTimeVR Playback in VLC ==<br />
<br />
* Enrolled in the last semester of a Bachelor of Computer Science at [http://www.buap.mx Benemérita Universidad Autónoma de Puebla].<br />
* Coding Platform: Ubuntu 8.10<br />
* [http://wiki.videolan.org/SoC_2009/QuickTimeVR_Playback Project proposal] on the VideoLAN Wiki<br />
* Mentor (VLC): Most likely Antoine Cellier.<br />
* Comentor (FreePV): Yuval Levy. Note: we're discussing this with VLC / Jean-Baptiste Kempf, including Wiimote navigation.<br />
* http://socghop.appspot.com/student_proposal/review/google/gsoc2009/leonox/t123831053541 Google webapp proposal<br />
<br />
== Dev Ghosh: Mosaic Mode for Hugin/Panotools==<br />
<br />
* PhD candidate, Electrical Engineering and Computer Science, Northwestern University, Evanston, Illinois, USA<br />
* Coding Platform: Ubuntu 8.10, AMD Athlon 64 3500+, 2 GB RAM, Eclipse <br />
* Coding skills: C/C++, MATLAB, Python<br />
* Possible Mentor: Daniel German<br />
* [[SoC2009_Dev_Ghosh | Mosaic Mode for Hugin/Panotools]]<br />
* Google webapp Proposal Link: (Incomplete and can't edit right now. Please see wiki link above.) http://socghop.appspot.com/student_proposal/show/google/gsoc2009/dkg/t123865551999 <br />
<br />
== [[User:Albiorix|Yulia Kotseruba]]: Straight-line detection for automated lens calibration ==<br />
<br />
* 4th year Computer Science student at University of Toronto studying Artificial Intelligence<br />
* Coding Platform: Mac OS 10.5.6, 2.53 GHz Intel Core 2 Duo, 4 GB RAM, XCode, Eclipse<br />
* Coding skills: C/C++, Java, Python, Matlab, OpenGL, Prolog<br />
* [[SoC2009_Yulia_Kotseruba | Straight-line detection for automated lens calibration]]<br />
* http://socghop.appspot.com/student_proposal/show/google/gsoc2009/albiorix/t123854228105 Google proposal<br />
<br />
== Mokhtar M. Khorshid: Accounting for Camera Movements ==<br />
<br />
* Holder of a B.S. in Computer Science & Mathematics from AUC, starting my master's studies in Computer Science.<br />
* Coding Platform: Vista 64-bit, Quad Core 2.4 GHz, 6GB RAM, 8800 GTS graphics card, Visual Studio 2005/2008 .NET.<br />
* Coding skills: Expert in C++ (10+ years) and a seasoned algorithm designer; I have experience with computer graphics, OpenGL, and wxWidgets.<br />
* [[SoC2009_Mokhtar_Khorshid | Accounting for Camera Movements]]<br />
<br />
<br />
== Tim Nugent ==<br />
<br />
* 3rd Year PhD student in Bioinformatics at University College London, UK<br />
* Ubuntu and Centos Linux, with XP imprisoned in Vmware <br />
* Coding skills: C/C++, Perl<br />
* [http://socghop.appspot.com/student_proposal/show/google/gsoc2009/ultrawide/t123859124187 Straight-line detection for automated lens calibration]<br />
* [http://socghop.appspot.com/student_proposal/show/google/gsoc2009/ultrawide/t123859431227 Bracketing Panorama Model]<br />
<br />
== Seth Berrier: Simple Masking & Bracketed/HDR Exposure Stacks ==<br />
(Sorry for the last minute addition. Hope I can find room among this qualified bunch!)<br />
<br />
* 6th Year PhD student & Doctoral Candidate in Computer Science at University of Minnesota, USA<br />
* Main Coding Platform: Mac OS X 10.5 w/ Apple GNU Toolchain & Eclipse IDE, 15" MacBook Pro, 2.4 GHz Core 2 Duo, 4G mem, NVIDIA GeForce 8600M GT<br />
* Other Platforms:<br />
** Win XP Pro w/ Visual C++ & VS .net, 2k5 or 2k8 OR MinGW & Eclipse IDE<br />
** Other *nix platforms (Ubuntu & Solaris in particular) using basic GNU toolchain from shell<br />
* Computer Science Knowledge:<br />
** Veteran in C++ (with a bit of C#, Java, Perl, LISP, StandardML, Pascal, Fortran, Basic, VB for Apps, HTML, Javascript)<br />
** Very experienced in OpenGL, GLUT, Cg and general graphics algorithms<br />
** Basic knowledge of Qt GUI programming & Qt Eclipse integration<br />
** Experienced with Matlab and some LabVIEW<br />
** Good Numerical Methods foundation with some practical experience<br />
** Basic, non-theoretical experience in real-time image processing for computer vision (Hough transform, Canny edge detection, etc)<br />
* [[SoC2009_Seth_Berrier | Simple Masking & Bracketed/HDR Exposure Stacks]]<br />
* [[SoC2009_Seth_Berrier | Straight-line detection for automated lens calibration]]<br />
<br />
== James Legg: Enblend / Enfuse Gimp plugin ==<br />
<br />
* I'm in the third and final year of a Computer Science and Mathematics BSc at the University of York, in the United Kingdom.<br />
* Coding Platform: Ubuntu 8.10, Pentium dual-core 2GHz, 2GB RAM<br />
* The [[SoC2009_James_Legg | Enfuse / Enblend Gimp plugin]] will allow a user of the Gimp to use the Enblend or Enfuse algorithms to merge layers of an image through the Gimp's menu or using a Gimp script.<br />
<br />
<br />
== Sumit Sinha ==<br />
<br />
* 2nd Year student pursuing a Bachelor's degree in Computer Science at the Indian Institute of Technology, India<br />
* Coding Platform: Vista 32-bit, Dual Core 1.6 GHz, 1GB RAM, Visual Studio 2005/2008 .NET.<br />
* Coding skills: C/C++, JAVA<br />
* Better Algorithm for Seam Optimization in Enblend/Enfuse<br />
* Hugin RAW support<br />
<br />
<br />
== Joe Templeman ==<br />
<br />
* Studying at Imperial College London, 2nd Year Computing MEng student<br />
Coding:<br />
* Main coding platform: Ubuntu 8.04: 2.66Ghz C2D E6750, 4GB RAM, Eclipse, GDB etc<br />
** Alternative platforms include Windows XP, Dell D430, 1.2Ghz C2D ULV, 2GB RAM, Visual Studio 2005/2008, Eclipse<br />
* Experienced C and Java, also C++, Haskell, Prolog<br />
* Strong mathematical and theoretical grounding, including<br />
** Logical/Inductive reasoning about programs<br />
** Formal specifications<br />
** Software engineering design patterns<br />
<br />
Photography Skills:<br />
* Nikon D50, D1H, 8mm Peleng fisheye, 15-30mm Sigma <br />
* I’ve not been very active recently but I invested a lot of time into Hugin and panoramas in general and hosted a load on Flickr. <br />
** I wrote quite a widely used [http://www.flickr.com/photos/jftphotography/sets/72157600232700728/ tutorial]<br />
** Feel free to check out my Panoramas etc, all using Hugin [http://www.flickr.com/photos/jftphotography/sets/72157600108689306/ here].<br />
* I’ve also worked as a part time post-production photography consultant specialising in photo-stitching, specifically Hugin.<br />
** http://www.bennorthover.com/folio4.html (The stereographic projections I assisted on are the 3rd, 4th, and 5th, and were published as an advert in The Economist)<br />
<br />
Project:<br />
* [[User:Joetempleman|Mask editing built into hugin, moving on to a enblend plugin for gimp for further and more specific mask editing.]]<br />
* [http://socghop.appspot.com/student_proposal/show/google/gsoc2009/joetempleman googleapp proposal]<br />
<br />
== Achin Agarwal ==<br />
<br />
* Second year undergraduate student of Computer Science and Engineering at the Indian Institute of Technology, Kharagpur, India.<br />
* Coding Platform: Intel Core 2 Duo, 2.53 GHz, 512 MB graphics card NVIDIA chipset2250, Windows XP, Visual Studio 2005/2008.<br />
* Coding skills: C, C++, Java, OpenGL, MATLAB.<br />
* Utility for creating a Philosphere.<br />
<br />
== Satyajeet Singh ==<br />
* Final year undergraduate student at Netaji Subhas Institute of Technology, Delhi University, New Delhi, India.<br />
* Coding: Intel P4 3.0 GHz, 1 GB RAM, Windows XP, Fedora Core 9, Visual Studio 2008, OpenCV<br />
* Coding Skills: C, C++, Visual Basic. I like coding and have developed an interest in the field of image processing over the last 10 months.<br />
<br />
Photography Skills:<br />
* Canon 7.2 Mega Pixel Camera<br />
** Yes, I do photograph panoramas<br />
<br />
You and Us:<br />
* Open Source Projects: <br />
** OMR-AI: [http://sourceforge.net/projects/omr-ai/ Low-cost OMR processing solution]<br />
** DEDUCTO: A board game for OLPC<br />
* Vision of Project: Often when creating panoramas, moving objects and large exposure differences can disturb the resulting panorama. I wish to create a continuous-tone panorama.<br />
<br />
[http://wiki.panotools.org/Dynamic_Image_Stitching_with_High_Exposure_Difference Dynamic Image Stitching with high exposure Difference]<br />
<br />
[http://socghop.appspot.com/student_proposal/show/google/gsoc2009/satyajeet/t123877899132 GSoC Proposal]<br />
<br />
[[Category:Community:Project]]</div>Dghttp://wiki.panotools.org/SoC_2009_student_proposalsSoC 2009 student proposals2009-03-30T17:35:06Z<p>Dg: /* Dev Ghosh */</p>
<hr />
<div></div>Dghttp://wiki.panotools.org/SoC_2009_student_proposalsSoC 2009 student proposals2009-03-30T17:13:54Z<p>Dg: </p>
<hr />
<div></div>Dg