== Introduction ==
MatchPoint is a next-generation control point (CP) generator, the result of the GSoC 2007 project [[SoC2007_project_Feature_Descriptor]]. Currently it offers only keypoint detection, not matching (matching was part of another GSoC 2007 project that was not carried out), so for now it can only be used as a replacement for generatekeys from [[autopano-sift]] or [[autopano-sift-C]].

A current version of MatchPoint can be obtained via the [[hugin]] project.

The goal of MatchPoint is to create a complete control point suite that [[hugin]] can use as a replacement for (or at least an alternative to) the existing autopano suites.

Until all features are implemented, using MatchPoint to create panoramic images also requires autopano-sift.

== Command line usage ==
Usage:
 MatchPoint [options] image1.jpg output.key
Parameters:
;-v: verbose output
;-t: generate a keypoint file for the MATLAB test suite (the file name is derived as image1.jpg.key)
Arguments:
;image1.jpg: path to the image to be analyzed
;output.key: output keypoint file

=== Example Workflow ===

First, extract features from the first image and write them to image1.key:
 MatchPoint image1.jpg image1.key

Do the same for the second image:
 MatchPoint image2.jpg image2.key

Match the features from the two generated key files using autopano:
 autopano project.pto image1.key image2.key

Finally, open the project file in Hugin:
 hugin project.pto

== Algorithm description ==

=== Integral images ===

Before the detection process, images are integrated. Each pixel (x,y) of the integral image holds the sum of all pixels from (0,0) to (x,y) in the original image. This makes computing the sum of any rectangular region much faster; in particular, convolution at any scale takes the same time.
This step is necessary only when box filters are used for detection.
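The region-sum trick can be sketched as follows (a minimal NumPy sketch; the function names are illustrative, not MatchPoint's own):

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over both axes: ii[y, x] = sum of img[0:y+1, 0:x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def region_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1+1, x0:x1+1] from at most four lookups,
    independent of the region size."""
    total = ii[y1, x1]
    if y0 > 0:
        total -= ii[y0 - 1, x1]
    if x0 > 0:
        total -= ii[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total

img = np.arange(16, dtype=np.float64).reshape(4, 4)
ii = integral_image(img)
print(region_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 9 + 10 = 30.0
```

Because every rectangle sum costs four lookups, a box filter of any size is equally cheap, which is what makes multi-scale detection fast.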
=== Feature detection ===

Points are detected with a Hessian detector using box filters. The detection process is described below step by step. The description may not match the code's flow exactly, because some steps run simultaneously (e.g. the first two).

==== Convolution of Pixels ====

Convolution of a pixel at a certain scale is carried out with an approximation of a 2D Gaussian function - this is called box filter detection. Each kernel has 4 convolution filters (3 of them unique - the xy and yx filters are the same). The resulting value is the determinant of the Hessian matrix whose elements are the convolution values of these filters.

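The determinant computation can be sketched as below. The 0.9 weight on the mixed term is an assumption borrowed from the closely related SURF detector, not taken from MatchPoint's source:

```python
def hessian_response(dxx, dyy, dxy, weight=0.9):
    """Approximate determinant of the Hessian from the three unique
    box-filter responses (the xy and yx filters give the same value).
    The 0.9 weight is an assumption borrowed from SURF."""
    return dxx * dyy - (weight * dxy) ** 2

# A pixel is a candidate keypoint where this response is large.
print(hessian_response(2.0, 3.0, 0.0))  # 6.0
```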
Kernel sizes for convolution are:
 9, 15, 21, 27   (1st octave)
 15, 27, 39, 51  (2nd octave)
 21, 45, 69, 93  (3rd octave)

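The table above follows a simple pattern that can be generated programmatically (a sketch; the helper name is illustrative): the first size grows by 6 per octave, and the step between consecutive sizes doubles each octave.

```python
def kernel_sizes(octaves=3, levels=4):
    """Reproduce the kernel-size table: first size 9 + 6*octave,
    step between levels 6 * 2**octave (i.e. 6, 12, 24)."""
    return [[9 + 6 * o + i * 6 * 2 ** o for i in range(levels)]
            for o in range(octaves)]

print(kernel_sizes())
# [[9, 15, 21, 27], [15, 27, 39, 51], [21, 45, 69, 93]]
```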
MatchPoint also offers two ways of convolution:
* box filter: faster and the preferred method
* sliding window (implemented for convenience but needs debugging): slower and more accurate, but also more sensitive to noise

==== Finding Maxima ====

In order to achieve invariance to scaling, detection needs to find maxima of the Hessian determinant across many scales. To handle this, octaves were introduced: an octave interpolates determinants at various scales (usually two). MatchPoint offers detection over at most 3 octaves (set by a parameter). E.g. in the first octave a point can be detected at scale 1.6 (= ((9+15)/2 * 1.2)/9, where 1.2 is the initial scale and 9 the initial kernel size) or 3.2 (= ((21+27)/2 * 1.2)/9). The keypoint's main scale is then the one with the highest value of the Hessian determinant.

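The scale formula from the example above, written out (names are illustrative):

```python
INITIAL_SCALE = 1.2   # scale corresponding to the smallest kernel
INITIAL_KERNEL = 9    # smallest kernel size

def interpolated_scale(k1, k2):
    """Scale assigned to a determinant maximum interpolated between
    two adjacent kernel sizes k1 and k2."""
    return ((k1 + k2) / 2) * INITIAL_SCALE / INITIAL_KERNEL

print(round(interpolated_scale(9, 15), 2))   # 1.6
print(round(interpolated_scale(21, 27), 2))  # 3.2
```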
==== Selecting Points ====

The next step is to select points with high determinant values (regardless of the scale at which they were detected); these are the invariant keypoints used in further processing. This is done with a fixed threshold, which should be set low (otherwise it may happen that no points are detected at all).

Then non-maxima suppression is applied (only points that are local maxima of the determinant are kept).

At this point we have a list of points of varying length, because the threshold from the previous step is hard-coded. This can yield over 200,000 points for larger images, which is too many (regardless of image size we need roughly the same number of control points for all images - at least 3 control points per pair of overlapping images), so the list needs to be cut down to below 10,000 points (a number chosen intuitively, based on processing time). (Note: this is work in progress.)

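The thresholding, 3x3 non-maxima suppression, and cut-down steps can be sketched together (an illustrative NumPy sketch; the threshold value and neighbourhood size are assumptions, not MatchPoint's exact settings):

```python
import numpy as np

def select_points(det, threshold=0.001, max_points=10000):
    """Keep pixels above a (low) fixed threshold that are strict local
    maxima of the determinant map in their 3x3 neighbourhood, then keep
    only the strongest responses."""
    h, w = det.shape
    points = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = det[y, x]
            if v <= threshold:
                continue
            patch = det[y - 1:y + 2, x - 1:x + 2]
            # strict local maximum: v is the unique maximum of its patch
            if v >= patch.max() and (patch == v).sum() == 1:
                points.append((x, y, v))
    points.sort(key=lambda p: -p[2])   # strongest first
    return points[:max_points]

det = np.zeros((5, 5))
det[2, 2] = 1.0
det[1, 1] = 0.5   # suppressed: adjacent to the stronger peak
print(select_points(det))  # only the point at (2, 2) survives
```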
=== Feature description ===

==== Orientation Assignment ====

Rotation invariance is achieved by assigning a main orientation to the interest point, using an algorithm proposed by Herbert Bay. The efficiency of this algorithm has not yet been tested, so the MatchPoint executable currently does not use any orientation assignment.

==== Shape Context descriptors ====

Each interest point is assigned a 36-element descriptor. The elements of this descriptor are organized according to the Shape Context descriptor proposed by [http://www.eecs.berkeley.edu/Research/Projects/CS/vision/shape/sc_digits.html S. Belongie, J. Malik and J. Puzicha] and adapted by [http://www.robots.ox.ac.uk/~vgg/research/affine/index.html K. Mikolajczyk] for the purpose of finding control points.

Steps in creating the descriptors:
# Detect edges in the region around the interest point (the region size depends on the scale at which the point was detected). MatchPoint uses vigra's API (Canny edge detection) for this.
# From the relative locations of the edge elements, build a histogram with 36 values, using log-polar coordinates. Each edge point's contribution to the histogram is based on its magnitude and orientation.
See the reference papers for details.

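The histogram-building step can be sketched as follows. The 3 radial x 12 angular bin split and the binning details are assumptions for illustration; the source does not specify how the 36 bins are divided:

```python
import numpy as np

def shape_context(edge_points, center, max_radius,
                  radial_bins=3, angular_bins=12):
    """36-value log-polar histogram (3 radial x 12 angular bins, an
    assumed split) of edge positions relative to the interest point;
    each edge point contributes its magnitude to one bin."""
    hist = np.zeros(radial_bins * angular_bins)
    cx, cy = center
    for x, y, magnitude in edge_points:
        dx, dy = x - cx, y - cy
        r = np.hypot(dx, dy)
        if r == 0 or r > max_radius:
            continue
        # log-polar radius: bins get wider further from the centre
        if r <= 1.0:
            r_bin = 0
        else:
            r_bin = min(int(np.log(r) / np.log(max_radius) * radial_bins),
                        radial_bins - 1)
        angle = (np.arctan2(dy, dx) + np.pi) / (2 * np.pi)  # in [0, 1]
        a_bin = min(int(angle * angular_bins), angular_bins - 1)
        hist[r_bin * angular_bins + a_bin] += magnitude
    return hist

# One edge point to the right of the centre, magnitude 2.0
h = shape_context([(9.0, 0.0, 2.0)], (0.0, 0.0), 10.0)
print(h.sum())  # 2.0
```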
== References ==
* [http://www.vision.ee.ethz.ch/~surf/ Speeded Up Robust Features - SURF]
* [http://www.robots.ox.ac.uk/~vgg/research/affine Matlab suite for testing, and papers on detection, descriptors and evaluation by K. Mikolajczyk]
* [http://www.eecs.berkeley.edu/Research/Projects/CS/vision/shape/sc_digits.html Shape Context descriptors]
[[Category:Community:Project]]
[[Category:Software:Platform:Windows]]
[[Category:Software:Platform:Linux]]
[[Category:Software:Platform:Mac OS X]]
[[Category:Software:Hugin]]