
John Houghton posted this clear explanation of optimization on the PTGui mailing list.

Assume for the moment that your images were taken with a perfectly positioned camera (no parallax) and a perfect lens (no distortions). Each image is placed on the spherical stitching surface and warped (transformed) such that, as viewed from the centre of the stitching sphere, it looks exactly the same as the original scene viewed from the camera position. The lens field of view (fov) determines the size of the images. It's then possible to simply slide the images around so that they align exactly in the overlapped areas. To enable the optimizer to perform this alignment, you assign control points on matching features in the images. (The optimizer does not "see" the images themselves.) To align the images perfectly, a minimum of two accurately placed control points per overlap is sufficient, though a few more will do no harm.
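
As a rough illustration of the warping step, the sketch below maps a pixel of a source image onto the unit stitching sphere from the lens hfov and the image's yaw, pitch and roll. This is not PTGui's actual code; the rectilinear lens model, the rotation order and the function name are assumptions made purely for illustration. Once every image can be mapped like this, aligning the images amounts to finding the orientations that make matching features land on the same point of the sphere.

<pre>
# Minimal sketch, assuming a rectilinear (normal) lens and a
# roll-then-pitch-then-yaw rotation order; names are illustrative.
import numpy as np

def pixel_to_sphere(x, y, width, height, hfov_deg, yaw_deg, pitch_deg, roll_deg):
    # Focal length in pixels, derived from the horizontal field of view.
    f = (width / 2.0) / np.tan(np.radians(hfov_deg) / 2.0)
    # Ray from the projection centre through the pixel (camera coordinates).
    ray = np.array([x - width / 2.0, y - height / 2.0, f])
    ray /= np.linalg.norm(ray)
    # Rotate the ray by the image orientation: roll, then pitch, then yaw.
    r, p, yw = np.radians([roll_deg, pitch_deg, yaw_deg])
    Rz = np.array([[np.cos(r), -np.sin(r), 0],
                   [np.sin(r),  np.cos(r), 0],
                   [0,          0,         1]])
    Rx = np.array([[1, 0,          0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p),  np.cos(p)]])
    Ry = np.array([[ np.cos(yw), 0, np.sin(yw)],
                   [ 0,          1, 0],
                   [-np.sin(yw), 0, np.cos(yw)]])
    return Ry @ Rx @ Rz @ ray   # a point on the unit stitching sphere
</pre>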

Alas, a real world lens is not perfect and commonly suffers from barrel or pincushion distortion. If uncorrected in the warping process, this will prevent the images aligning perfectly in the overlap areas. This issue is addressed by the lens parameters a, b and c, which together control the amount of distortion correction applied. When they are enabled, the optimizer will try different values of these parameters in its efforts to get all the control points to align as well as possible. Note that to enable the optimizer to align the images accurately all over an overlap area, it needs control points nicely spread over that whole area - or at least all along the anticipated line where the blender will position the seam. If you already know the optimum values of the parameters a, b and c (from previous projects), these can be entered manually, via the lens database, or from a template. They then don't need to be included in the optimization, and the wide spread of control points is no longer so important.
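
The correction itself is a simple radial polynomial. The sketch below shows the usual PanoTools-style model in which a, b and c determine how far from the image centre a pixel is resampled; d is derived so that a normalised radius of 1.0 maps onto itself. Normalisation and sampling details are simplified here for illustration; this is not the stitcher's actual source.

<pre>
# Sketch of the PanoTools-style radial correction driven by a, b and c.
def correct_radius(r_dest, a, b, c):
    """Return the source-image radius to sample for a destination radius.

    Radii are normalised so that r = 1.0 falls at half the smaller image
    dimension; d is chosen so that r = 1.0 maps onto itself.
    """
    d = 1.0 - (a + b + c)
    return r_dest * (a * r_dest**3 + b * r_dest**2 + c * r_dest + d)

# With a = b = c = 0 the lens is treated as distortion-free:
assert correct_radius(0.5, 0.0, 0.0, 0.0) == 0.5
</pre>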

Two further factors that cause problems in optimization are object movement and parallax:

Control points should not be assigned on anything likely to have moved between shots. The moving object will appear to be in different positions relative to the background in successive shots. Clearly, the optimizer will be unable to align the stationary background and the moving object simultaneously. Aligning the moving object will compromise the alignment of the background. E.g. control points on moving clouds might be nicely aligned, but the horizon might then become misaligned or bent. The automatic control point generator won't avoid moving objects, so you need to be vigilant and delete any points unwisely assigned.

The problems arising from parallax (due to not rotating the camera about the entrance pupil of the lens) are similar. Near objects appear to move relative to the background in successive shots and so should not have control points assigned to them. A distant background will be largely unaffected by parallax and should be aligned well by the optimizer. Bear in mind that in the case of fisheye lenses, parallax effects can't generally be avoided completely: the entrance pupil position varies for light rays entering the lens at different angles to the lens axis, so it's not a single point.

When the optimizer gives a bad result, with large control point distances reported, check the placement of the bad points. Try to account for why the cp distances are so big. If a point is not assigned accurately on the same feature in both images, correct it so that it is. If it's accurately positioned already, did the feature move between shots or is it likely to be affected by parallax? If so, delete the point. If not, look for another explanation. Maybe the lens parameters are bad or were not included in the optimization when they should have been. The shift parameters d and e should be optimized in the case of fisheye lenses, but be on guard for silly, unlikely values in any of the lens parameters. Be guided by experience. You can always manually reset the lens parameters to zero and try another run.
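
A quick way to apply this advice is simply to look at the worst points first. The helper below is purely illustrative and not part of PTGui or the panotools libraries: it takes the per-point distances from the optimizer report and lists those above a threshold, worst first, so they can be inspected, corrected or deleted.

<pre>
# Illustrative helper, not part of any stitcher's API: list the control
# points whose reported distance exceeds a threshold, worst first.
def suspect_points(control_points, threshold=2.0):
    """control_points: iterable of (point_id, distance) pairs."""
    bad = [(pid, dist) for pid, dist in control_points if dist > threshold]
    return sorted(bad, key=lambda item: item[1], reverse=True)

report = [("cp1", 0.6), ("cp2", 14.2), ("cp3", 1.1), ("cp4", 5.3)]
print(suspect_points(report))   # [('cp2', 14.2), ('cp4', 5.3)]
</pre>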

Hopefully, you will be able to achieve an optimization in which the control point distances are very low (less than 2, say), with control points nicely spread, and the stitched result should then be fine. Misalignments due to parallax or movement may need correction in postprocessing; Smartblend can often be helpful in disguising these errors too, by routing the seams around objects during the blending.


and then in a follow-up post:


The optimizer will never change the positions of the points in the images. They should be assigned on identical features, but the optimizer does not access the images and cannot therefore know whether they are or are not accurately placed.

The optimizer positions the images so as to minimise the separation of the control points, ideally reducing all the cp distances to 0 when the points are perfectly aligned (if that is possible). If you have a lot of bad points and you correct the placement of one point, the optimizer will change the positions of the images to take account of this. In doing so, some points may move closer and others further apart, but the average cp distance should be reduced somewhat. Large control point distances can be due to inaccurate placement, but can also result from other causes, e.g. poor lens parameters, parallax or movement, as explained in the previous post.
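
To make the distinction concrete, here is a deliberately over-simplified toy of what the optimizer does. This is not PTGui's algorithm: control points are treated as plain angular offsets and only the yaw and pitch of the second image are fitted. Note that the control point coordinates never change; only the image orientation does, and the residuals being minimised are exactly the "cp distances".

<pre>
# Toy illustration of the optimization step, assuming control points are
# simple angular offsets within each image (real stitchers use full
# spherical geometry and many more parameters). Needs numpy and scipy.
import numpy as np
from scipy.optimize import least_squares

def residuals(params, cps_img1, cps_img2):
    # params: yaw and pitch of image 2; image 1 is held fixed at (0, 0).
    yaw2, pitch2 = params
    res = []
    for (x1, y1), (x2, y2) in zip(cps_img1, cps_img2):
        # Misalignment of one control point pair -- its "cp distance".
        res.append(x1 - (yaw2 + x2))
        res.append(y1 - (pitch2 + y2))
    return res

# Matching features, as angular offsets in degrees (made-up numbers).
cps_img1 = [(10.0, 2.0), (12.0, -3.0)]
cps_img2 = [(-20.0, 2.1), (-18.0, -2.9)]
fit = least_squares(residuals, x0=[0.0, 0.0], args=(cps_img1, cps_img2))
print(fit.x)   # ~[30.0, -0.1]: image 2 must be yawed about 30 degrees
</pre>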

External links

* Good tutorial on optimization: http://www.johnhpanos.com/optitute.htm