PanoTools.org Wiki, user page of Girlliyanli (last edited 2007-07-28)

My name is Yanli Li. I am a Chinese student doing research on panoramas.

== Education Background ==
* 1997.09-2000.07: High school student.
** No.1 Middle School of Kaifeng County, Henan province, China
* 2000.09-2004.07: Undergraduate.
** Department of Computer Science, [http://www.sdu.edu.cn/english05/ Shandong University], Jinan, Shandong province, China
* 2004.09-2007.07: Postgraduate.
** Human-Computer Interface and Virtual Reality Lab, Department of Computer Science, [http://www.sdu.edu.cn/english05/ Shandong University], Jinan, Shandong province, China
* 2007.09- : Doctoral student.
** Virtual Reality Lab, [http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics], Beijing, China

== Programming Experience ==
* My main programming language is C/C++. I started programming in it in 2001 and have used it continuously ever since. Most of my projects involve image processing. My undergraduate thesis, the System of Text-Image Preprocessing, won a Shandong University Excellent Undergraduate Design award and drew heavily on image-processing techniques, so I know the field well.
* When I joined the HCI&VR lab in 2004, I began studying panoramas. First I analyzed PTViewer's Java code and, after porting its source from Java to C++, built a panorama viewer that runs on a local machine. I also designed a scene-tour system on top of the viewer. Later I moved on to image stitching. After studying the source code of Hugin, SIFT, PanoTools, and Enblend, I built a panoramic stitching system in VC.NET. It is fully automatic: the framework comes from Hugin, the control points from SIFT, matching uses a kd-tree, blending uses the multiresolution spline method, and a modified RANSAC removes outliers more quickly.

== Programming experience about panorama viewer ==
* The first panorama material I consulted was [http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben's panorama tutorial]. The first panorama system I wrote was a panoramic viewer. [http://webuser.hs-furtwangen.de/~dersch/ PTViewer] was the usual tool for viewing panoramas, but the only source code I could find at the time was the Java applet version, so to view panoramas on a local machine PTViewer had to be ported from Java to C++. It took me about two months to analyze the interfaces in the source code, document its structure, build a framework, design the classes, write the detailed code, and remove the remaining bugs. My tutor advised me to use Direct3D for speed, but I found the viewer worked well without it, so the whole port is plain C++. As I understand it, the principle of PTViewer is this: the input is a spherical panorama and the output is a frame on the screen, with a virtual sphere in between. The frame is generated from three parameters of the virtual sphere: the pan angle, the tilt angle, and the horizontal field of view (hfov), which in my VC.NET build are controlled through mouse events, keyboard events, and menu or toolbar commands. First the panorama is projected onto the virtual sphere by one pair of formulas; then the frame is projected onto the sphere by another pair of formulas driven by the three parameters. This establishes the mapping between frame and panorama, and with backward projection we can fetch the corresponding panorama pixel for each frame pixel.
* A single panorama covers the view from only one site, which is not enough information to tour a whole scene. Following my tutor's advice, I built a scene-tour system with two parts: a scene-tour designer for building the tour and a scene-tour shower for viewing it. It involved only interface design, with no intricate theory. First we gather the material: a map of the scene and several panoramas taken in it. Next, using the designer, we load the map, link each site on the map to its panorama, and adjust the viewing directions. Finally we save the result to a ".tour" file, whose structure we defined ourselves. This file can then be opened in the scene-tour shower to tour the scene.

== Programming experience about panoramic stitching ==
Spherical panoramas viewable in PTViewer are usually stitched from images captured with fisheye lenses, which is inconvenient, expensive, and complex. Can we instead stitch overlapping images taken with a hand-held camera? With this goal in mind I consulted many articles. My tutor recommended [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M. Brown's "Recognising Panoramas"], saying that if I could build a system from that paper, we would succeed. The other materials he recommended were [http://hugin.sourceforge.net/ Hugin], PanoTools, [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift], and [http://enblend.sourceforge.net/ Enblend]. So I set to work step by step.
* First, control points (SIFT features) must be extracted from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof. Lowe's SIFT] material and the autopano-sift source code carefully until I understood clearly how they work, then spent more than a month porting the code from C# to C++. I revised the program repeatedly afterwards, making it faster and easier to read.
* Second, the keypoints must be matched. M. Brown matched keypoints with Best-Bin-First search; I used a kd-tree (the implementation from [http://www.cs.umd.edu/~mount/ANN/ ANN]) and found it fast and reliable. To remove outliers (incorrect matches), we modified RANSAC, and our experiments confirmed its efficiency.
* Third, we must estimate the parameters that relate the images to one another. M. Brown used bundle adjustment, which was hard for me to program, and I found no usable source code. After a month's delay I changed my approach and built a simpler, only partially automatic system: one input image must be a central image that overlaps all the others. The system recognizes this central image and stitches the rest to it. The projection is an 8-parameter homography estimated with Levenberg-Marquardt ([http://www.ics.forth.gr/~lourakis/levmar/ levmar]).
* Even so, I never gave up on a fully automatic system, and thanks to [http://hugin.sourceforge.net/ Hugin] I finally built one. After downloading Hugin, VIGRA, and wxWindows, I compiled them successfully in VC.NET, stepped through the code in the debugger until I understood how it worked, and began extracting the parts I needed. The system turned out to be monolithic, so I decided to build on it directly: I removed the wxWindows layer, built an interface with MSVC, and integrated the SIFT and Enblend code into the framework. The interface is simple and the system is fully automatic: given several input images, it divides them into groups and stitches each group into a cylindrical, planar, or spherical panorama.
I know Hugin is a GUI built on PanoTools as its core engine, and it is wonderful core code. I have never stopped analyzing its principles, and I know its interfaces well. I would also like to rebuild it so that it is easier for me to use in the future.

== Contact ==
* MSN: girlliyanli@hotmail.com
* Google Talk: girlliyanli@gmail.com
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality lab,[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panorama viewer system running on local machine after port the ptviewer's souce code from jave to c++. I also designed a scene-tour system based on panorama viewer. Later, I began to do study on image stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Programming experience about panorama viewer ==<br />
*The first panorama material I consulted was [http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben's panorama tutorial]. The first panorama system I wrote was a panoramic viewer. [http://webuser.hs-furtwangen.de/~dersch/ ptviewer] was the standard tool for viewing panoramas, but the only source code I could find at the time was the Java applet, so to view panoramas on a local machine I had to port ptviewer from Java to C++. It took me about two months to analyze the source code's interfaces, document its structure, build a framework, design the classes, write the detailed code and remove the remaining bugs. Although my tutor advised me to use Direct3D to speed it up, I found it worked well without Direct3D, so all the source code is plain C++. The principle of ptviewer, as I understand it, is the following: the input is a spherical panorama and the output is a frame on the screen, with a virtual sphere between them. The frame is determined by three viewing parameters of the virtual sphere: the pan angle, the tilt angle and the horizontal field of view (hfov), which are controlled in VC.NET by mouse events, keyboard events and menu or toolbar commands. First the panorama is projected onto the virtual sphere, then the frame is projected onto the same sphere according to the three parameters; this establishes the relationship between the frame and the panorama, and with the backward projection we can fetch the corresponding panorama pixel for each frame pixel.<br />
*One panorama only covers the view from a single site, which is not enough information if we want to tour around a scene. Following my tutor's advice, I built a scene-tour system with two parts: a scene-tour designer for building tours and a scene-tour viewer for walking through them. It mainly involved interface design rather than intricate theory. First we gather the material: the scene's map and several panoramas taken at sites in the scene. Second, we load the map into the designer, link each map site to its corresponding panorama and adjust the directions. Last, we save a file ending in ".tour", whose structure we defined ourselves. The file can then be opened in the scene-tour viewer to tour the scene.<br />
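The backward projection described above can be sketched in a few lines of C++. This is an illustrative reconstruction of the idea, not ptviewer's actual code: the function name, coordinate conventions and image sizes are all my own assumptions.

```cpp
#include <cmath>

const double PI = 3.14159265358979323846;

// Illustrative sketch of the ptviewer-style backward projection described
// above (not ptviewer's actual API): map one screen pixel (px, py) of a
// W x H view, with the given pan, tilt and hfov (all in radians), to pixel
// coordinates (u, v) on an equirectangular panorama of size panoW x panoH.
void backwardProject(int px, int py, int W, int H,
                     double pan, double tilt, double hfov,
                     int panoW, int panoH, double* u, double* v)
{
    // Focal length in pixels, derived from the horizontal field of view.
    double f = (W / 2.0) / std::tan(hfov / 2.0);

    // Ray through the pixel in camera coordinates (z forward, y up).
    double x = px - W / 2.0;
    double y = H / 2.0 - py;
    double z = f;

    // Rotate the ray by tilt (about the x axis), then by pan (about y).
    double y1 = y * std::cos(tilt) - z * std::sin(tilt);
    double z1 = y * std::sin(tilt) + z * std::cos(tilt);
    double x2 = x * std::cos(pan) + z1 * std::sin(pan);
    double z2 = -x * std::sin(pan) + z1 * std::cos(pan);

    // Direction -> spherical angles -> equirectangular pixel coordinates.
    double yaw   = std::atan2(x2, z2);                  // [-pi, pi]
    double pitch = std::atan2(y1, std::hypot(x2, z2));  // [-pi/2, pi/2]
    *u = (yaw + PI) / (2.0 * PI) * panoW;
    *v = (PI / 2.0 - pitch) / PI * panoH;
}
```

Looking straight ahead (pan = tilt = 0), the centre pixel of the view lands on the centre of the panorama; panning by 90 degrees shifts it a quarter of the panorama width.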
<br />
==Programming experience about panoramic stitching==<br />
Spherical panoramas that can be viewed in ptviewer are usually stitched from images captured with fisheye lenses, which is inconvenient, expensive and complex. Can we instead stitch overlapping images taken with a hand-held camera? With this goal in mind I consulted many articles. My tutor recommended [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M. Brown's "Recognising Panoramas"] and said that if I could build a system following this paper, we would succeed. The other materials he recommended were [http://hugin.sourceforge.net/ Hugin], panotools, [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift] and [http://enblend.sourceforge.net/ Enblend], so I began to work through it step by step.<br />
*First, we extract control points (SIFT features) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof. Lowe's SIFT] and autopano-sift's source code carefully until I understood clearly how they work, then ported autopano-sift from C# to C++, which took me more than a month. The program was refined over time to become more efficient and easier to read.<br />
*Second, the keypoints must be matched. M. Brown matched keypoints with Best-Bin-First search. I downloaded kd-tree source code from [http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. To remove outliers (incorrect matches), we modified RANSAC, and our experiments verified its efficiency.<br />
*Third, we calculate the parameters that relate the images to one another. M. Brown used bundle adjustment, but it was hard for me to implement and I found no available source code. After being stuck for a month, I changed my mind and built a comparatively simpler system. It is limited and only partially automatic: one of the input images must be a central image, i.e. every other image must overlap with it. The system recognizes this central image and stitches the others to it. The projection we used is an 8-parameter matrix, estimated with [http://www.ics.forth.gr/~lourakis/levmar/ Levenberg-Marquardt (levmar)].<br />
*In fact, I never gave up on building a fully automatic system, and thanks to [http://hugin.sourceforge.net/ Hugin] I finally succeeded. After downloading Hugin, VIGRA and wxWidgets, I compiled them successfully with VC.NET, debugged the code step by step and gradually understood how it worked. I then began to extract the code I needed, but found the system monolithic, so I decided to build my system on top of it. I removed wxWidgets, built an interface with MSVC, and integrated the SIFT code and Enblend's code into this framework. The interface is simple and the system is fully automatic: given several input images, it divides them into groups and stitches each group into a cylindrical, planar or spherical panorama.<br />
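The keypoint-matching step in the second bullet can be illustrated in plain C++. This sketch uses a brute-force scan in place of the kd-tree / Best-Bin-First search used in the real system, and the function name, descriptor layout and ratio threshold are my own illustrative choices.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Lowe-style ratio-test matching of feature descriptors, as in the second
// stitching step above. A brute-force scan stands in for the kd-tree /
// Best-Bin-First search; descriptors are plain float vectors (SIFT's are
// 128-dimensional). Returns the index of the accepted match, or -1 if the
// nearest neighbour is not clearly better than the second nearest.
int matchDescriptor(const std::vector<float>& query,
                    const std::vector<std::vector<float>>& candidates,
                    double ratio = 0.8)
{
    double best = 1e30, second = 1e30;
    int bestIdx = -1;
    for (std::size_t i = 0; i < candidates.size(); ++i) {
        double d = 0.0;  // squared Euclidean distance to candidate i
        for (std::size_t k = 0; k < query.size(); ++k) {
            double diff = query[k] - candidates[i][k];
            d += diff * diff;
        }
        if (d < best)        { second = best; best = d; bestIdx = (int)i; }
        else if (d < second) { second = d; }
    }
    // Accept only unambiguous matches: the best distance must beat the
    // second-best by the given ratio, otherwise discard the match.
    if (second > 0.0 && std::sqrt(best) < ratio * std::sqrt(second))
        return bestIdx;
    return -1;
}
```

The ratio test is what keeps repetitive texture (sky, windows) from producing false matches; the rejected pairs are exactly the ambiguous ones that RANSAC would otherwise have to remove.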
Hugin is a GUI that uses panotools as its core engine, and panotools is wonderful core code. I have never stopped analyzing its principles, and I know its interfaces well. I would also like to rebuild it so that it is easier to use in the future.<br />
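The "8-parameter matrix" in the third bullet is a planar homography: a 3x3 matrix with the last entry fixed to 1, leaving 8 free parameters that levmar estimates from the matched control points. Applying one to a point is simple; this helper is illustrative, not taken from the actual system.

```cpp
// Apply a 3x3 homography h (row-major, h[8] fixed at 1 so only 8 parameters
// are free) to the point (x, y), writing the mapped point to (xo, yo).
// In the stitching system above this is what maps a pixel of a side image
// into the central image's coordinate frame.
void applyHomography(const double h[9], double x, double y,
                     double* xo, double* yo)
{
    double w = h[6] * x + h[7] * y + h[8];  // projective normalisation
    *xo = (h[0] * x + h[1] * y + h[2]) / w;
    *yo = (h[3] * x + h[4] * y + h[5]) / w;
}
```

With h[6] = h[7] = 0 the mapping degenerates to an affine transform; the two extra parameters are what let a homography model the perspective distortion between overlapping views.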
<br />
<br />
<br />
== Contact ==<br />
*msn:girlliyanli@hotmail.com<br />
*gtalk:girlliyanli@gmail.com</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8768User:Girlliyanli2007-04-10T01:41:39Z<p>Girlliyanli: /* Google Summer of Code 2007 */</p>
<hr />
<div>
== Google Summer of Code 2007 ==<br />
After three years of studying panoramas, I believe panotools is the most wonderful piece of open-source code and Hugin the greatest open-source GUI, and I appreciate their contributions. But neither is perfect; there is much work to do to extend them. The subprojects shown on the website are all very important, and I am sure that once they are completed it will be a milestone for the panorama field and will benefit all researchers. Because I have analyzed panotools and Hugin for a while and know them well, I am eager to do something for them, so I am applying to this project and hoping for a chance.<br />
The items I am interested in:<br />
*[http://wiki.panotools.org/SoC2007_project_Panotools_Architecture Architectural Overhaul of Panotools] I have never stopped studying panotools. I read it from time to time and find it amazing; I once wrote a rough description of its functions' interfaces. I am passionate about this project.<br />
**Phase 1 (1-2 weeks): Read the panotools source code and analyse its structure. Write a document describing its structure and interfaces.<br />
**Phase 2 (6-7 weeks): Rebuild panotools, removing unused source code and writing a clean interface.<br />
**Phase 3 (4-5 weeks): Optimise the source code and write comments. Write documentation for the new panotools.<br />
**CLASSES:<br />
***CImg {width,height,**pdata,pitch,yaw,roll,hfov,type}<br />
***CCtrolPoint {ImgN1,ImgN2,x1,y1,x2,y2}<br />
***CBlock {CImg* imgarr,CCtrolPoint *cparr}<br />
**METHODS:<br />
***GetCP_sift(CImg* imgarr,CBlock* blockarr)<br />
***GetCP_surf(CImg* imgarr,CBlock* blockarr)<br />
***Optimise(CBlock block)<br />
***Stitching(CImg* imgarr, CImg &output)<br />
***Remap(CImg src,CImg &des)<br />
*[http://wiki.panotools.org/SoC2007_project_Feature_Descriptor Automatic feature detection for panoramic images] The keypoint detector I have studied most is [http://www.cs.ubc.ca/~lowe/keypoints/ SIFT (2004)]. I am now studying [http://www.vision.ee.ethz.ch/~surf/index.html SURF (2006)], which is said to be faster and more efficient. Whether the stitching process can be made automatic depends mostly on whether the control points can be extracted properly and matched correctly, so this is a very important job. Even so, there are still conditions under which SIFT does not work well, such as when the camera moves a great deal. Because I have implemented SIFT and read many materials about other keypoint detectors, I find this interesting and want to keep studying the field.<br />
*[http://wiki.panotools.org/SoC2007_projects#Anti-ghosting_HDR_panorama_blending_and_merging_algorithm Anti-ghosting HDR panorama blending and merging algorithm] I have studied and implemented the multiresolution spline (1983), which is widely used as a blending method because it avoids slight ghosting and blends seamlessly; the code I integrated into my system came from Enblend. But when the images to be stitched contain, say, a moving bus, how can we remove it? This problem is tough to handle, but it must be solved, because it is common for the images being stitched to contain moving objects. I have some knowledge of image blending and am eager to solve this problem.<br />
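The class layout proposed under the architecture item can be turned into a compilable C++ sketch. The class and field names follow the proposal above; the concrete types and the std::vector containers are my own illustrative choices.

```cpp
#include <vector>

// Compilable sketch of the classes proposed above for the panotools
// overhaul; field names follow the proposal, types are illustrative.
struct CImg {
    int width = 0, height = 0;
    unsigned char** pdata = nullptr;       // pixel data rows
    double pitch = 0, yaw = 0, roll = 0;   // orientation of the image
    double hfov = 0;                       // horizontal field of view
    int type = 0;                          // lens / projection type
};

struct CCtrolPoint {
    int ImgN1 = 0, ImgN2 = 0;              // indices of the two images
    double x1 = 0, y1 = 0;                 // point in image ImgN1
    double x2 = 0, y2 = 0;                 // matching point in image ImgN2
};

// A block groups the images that stitch into one panorama together with
// the control points that link them.
struct CBlock {
    std::vector<CImg> imgarr;
    std::vector<CCtrolPoint> cparr;
};
```

The proposed methods (GetCP_sift, Optimise, Stitching, Remap) would then be free functions operating on CBlock and CImg, which keeps the core engine usable without any GUI.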
<br />
I was very glad to learn that Hugin/panotools is participating in this year's GSoC, because I am always studying Hugin/panotools and want to learn more about them. I am eager to collaborate with people who are also interested in this field. This opportunity is very meaningful to me, so I will devote myself fully to the project if I am accepted. I am totally free during the holiday; three months or even more is no problem, and that period is long enough for me to finish the task. I am confident I will not let you down.<br />
<br />
== Contact ==<br />
*msn:girlliyanli@hotmail.com<br />
*gtalk:girlliyanli@gmail.com</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8748User:Girlliyanli2007-04-06T06:40:30Z<p>Girlliyanli: /* Education Background */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality lab,[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panorama viewer system running on local machine after port the ptviewer's souce code from jave to c++. I also designed a scene-tour system based on panorama viewer. Later, I began to do study on image stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Programming experience about panorama viewer ==<br />
*The first panorama material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
*One panorama covers the view from only one site, which is not enough information to tour a whole scene. Following my tutor's advice, I built a scene-tour system. It comprises two parts: one builds the scene tour (the scene-tour designer), the other views it (the scene-tour shower). It mainly involved interface design without any intricate theory. First, we gather the material: a map of the scene and several panoramas taken in it. Second, we load the map and, with the scene-tour designer, link each site on the map to its corresponding panorama and adjust the directions. Last, we save the result as a file ending in ".tour", whose structure we defined ourselves. This file can then be opened in the scene-tour shower to tour around the scene.<br />
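The backward projection described above can be sketched as follows. This is my own minimal reconstruction of the principle, not ptviewer's actual code, and all names are illustrative: for each frame pixel we build a viewing ray, rotate it by the tilt and pan angles, and read off equirectangular coordinates on the panorama.<br />

```cpp
#include <cmath>

const double PI = 3.14159265358979323846;

struct PanoCoord { double u, v; };  // normalized panorama coordinates in [0,1]

// Backward projection: screen pixel -> viewing ray -> rotated direction ->
// (longitude, latitude) -> equirectangular (u, v). Angles are in radians.
PanoCoord backwardProject(int x, int y, int frameW, int frameH,
                          double pan, double tilt, double hfov)
{
    // Focal length in pixels, derived from the horizontal field of view.
    double f = (frameW / 2.0) / std::tan(hfov / 2.0);
    // Ray through the pixel in camera coordinates (z points forward).
    double cx = x - frameW / 2.0;
    double cy = y - frameH / 2.0;
    double cz = f;
    // Rotate by tilt (about x), then by pan (about y).
    double ry  =  cy * std::cos(tilt) - cz * std::sin(tilt);
    double rz1 =  cy * std::sin(tilt) + cz * std::cos(tilt);
    double rx  =  cx * std::cos(pan) + rz1 * std::sin(pan);
    double rz  = -cx * std::sin(pan) + rz1 * std::cos(pan);
    // Direction -> spherical angles -> equirectangular coordinates.
    double theta = std::atan2(rx, rz);                  // longitude [-pi, pi]
    double phi   = std::atan2(ry, std::hypot(rx, rz));  // latitude [-pi/2, pi/2]
    PanoCoord p;
    p.u = (theta + PI) / (2.0 * PI);
    p.v = (phi + PI / 2.0) / PI;
    return p;
}
```

Iterating this over every frame pixel (with interpolation when sampling the panorama) yields the rendered view.<br />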
<br />
==Programming experience about panoramic stitching==<br />
Spherical panoramas that can be viewed with ptviewer are usually stitched from images captured with fisheye cameras, which is inconvenient, expensive and complex. Can we instead stitch overlapping images taken with a hand-held camera? With this goal in mind, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M. Brown’s "Recognising Panoramas"] was recommended by my tutor, who said that if I could build a system according to this article, we would make it. The other materials he recommended were [http://hugin.sourceforge.net/ Hugin], Panorama Tools, [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift] and [http://enblend.sourceforge.net/ Enblend]. So I began, step by step.<br />
*First, we extract control points (SIFT features) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof. Lowe's SIFT paper] and autopano-sift's source code carefully until I clearly understood how they work, then ported the code from C# to C++. It took me more than a month; later on I kept revising the program to make it more efficient and more readable. <br />
*Second, the key points must be matched. M. Brown matched key points with Best-Bin-First search. I downloaded kd-tree source code from [http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. For removing outliers (mismatched pairs), we modified RANSAC, and our experiments verified its efficiency.<br />
*Third, we calculate the parameters that relate the images to one another. M. Brown used bundle adjustment for this, but it was hard for me to program and I could not find usable source code. After being stuck for a month, I changed my mind and built a comparatively simpler system. It is limited and only partially automatic: one of the input images must be a central image, that is, every other image overlaps with it. The system recognizes this central image and stitches the others to it. The projection we used is an 8-parameter matrix calculated with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. <br />
*In fact, I never gave up on building a fully automatic system, and thanks to [http://hugin.sourceforge.net/ Hugin] I made it at last. After downloading Hugin, VIGRA and wxWidgets, I compiled them successfully in VC.NET, debugged them step by step and gradually understood how they worked. I then began to extract the code I needed, but found the system monolithic, so I decided to build my system on top of it. I removed wxWidgets and built an interface with MSVC, and I integrated the SIFT code and Enblend's code into this framework. The interface is simple and the system is fully automatic: after several images are input, the system divides them into groups and stitches each group automatically into a cylindrical, planar or spherical panorama.<br />
I know Hugin is a GUI that uses Panorama Tools as its core engine, and that core code is wonderful. I have never stopped analyzing its principles, and I know its interfaces well. I would also like to rebuild it, hoping to make it easier to use in the future.<br />
<br />
== Google Summer of Code 2007 ==<br />
After three years of studying panoramas, I believe Panorama Tools is the most wonderful open-source core and Hugin the greatest open-source GUI, and I appreciate their contributions. But neither is perfect; there is much work to do to extend them. The subprojects listed on the website are all very important, and I am sure that once they are completed it will be a milestone in the field of panoramas and will benefit all researchers. Because I have analyzed Panorama Tools and Hugin for a while and know them well, I am eager to contribute, so I am applying for this project and hoping for a chance.<br />
The projects I am interested in:<br />
*[http://wiki.panotools.org/SoC2007_project_Panotools_Architecture Architectural Overhaul of Panotools] I have never stopped studying Panorama Tools; I read the code from time to time and find it amazing. I once wrote a rough description of its functions' interfaces. I am passionate about this project.<br />
**Phase 1 (1-2 weeks): Read the Panorama Tools source code and analyse its structure; write a document describing the structure and the interfaces.<br />
**Phase 2 (6-7 weeks): Rebuild Panorama Tools, removing unused source code and writing a clean interface.<br />
**Phase 3 (4-5 weeks): Optimise the source code and write comments; write documentation for the new Panorama Tools.<br />
<br />
*[http://wiki.panotools.org/SoC2007_project_Feature_Descriptor Automatic feature detection for panoramic images] The key-point detector I have studied most is [http://www.cs.ubc.ca/~lowe/keypoints/ SIFT (2004)]. I am now studying [http://www.vision.ee.ethz.ch/~surf/index.html SURF (2006)], which is said to be faster and more efficient. Whether the stitching process can be automatic mostly depends on whether control points can be extracted properly and matched correctly, so this is a very important job. Even so, there are still conditions under which SIFT does not work well, for example large camera motion. Because I have programmed SIFT and read much material about other key-point detectors, I find this interesting and want to continue studying this field.<br />
*[http://wiki.panotools.org/SoC2007_projects#Anti-ghosting_HDR_panorama_blending_and_merging_algorithm Anti-ghosting HDR panorama blending and merging algorithm] I have studied and implemented the multiresolution spline (1983), which is widely used as a blending method because it avoids slight ghosting and blends seamlessly; the code I integrated into my system comes from Enblend. But when the images to be stitched contain, say, a moving bus, how do we remove it? I know this problem is tough, but it should be solved, because it is common for the images being stitched to contain a moving object. I have some knowledge of image blending and am eager to solve this problem.<br />
<br />
I was very glad to learn that Hugin/Panorama Tools is participating in this year's GSoC. I have been studying Hugin/Panorama Tools continuously and want to learn more, and I am eager to collaborate with people who are also interested in this field. This opportunity is very meaningful for me, so I will devote myself fully to the project once I am accepted. I am totally free during the holiday; three months or even more is no problem, and I am confident that this period is long enough to finish the task. I am sure I will not let you down.<br />
== Contact ==<br />
*msn:girlliyanli@hotmail.com<br />
*gtalk:girlliyanli@gmail.com</div>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panorama viewer system running on local machine after port the ptviewer's souce code from jave to c++. I also designed a scene-tour system based on panorama viewer. Later, I began to do study on image stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Programming experience about panorama viewer ==<br />
*The first panorama material I consulted was [http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial]. The first panorama system I programmed was a panoramic viewer. As we all know, [http://webuser.hs-furtwangen.de/~dersch/ ptviewer] was used to view panoramas, but the only source code I could find then was the Java code of an applet. So it seemed that if we wanted to view panoramas on a local machine, we had to port ptviewer from Java to C++. It took me about two months to analyze the interfaces of the source code, write some documents about its structure, build a framework, design several classes, write the detailed code and remove small bugs. Although my tutor advised me to use Direct3D to speed it up, I found it worked well without Direct3D, so all the source code is plain C++. The principle of ptviewer, as I understand it, is the following: the input is a spherical panorama, the output is a frame on the screen, and a virtual sphere sits between them. The frame is created from three parameters of the virtual sphere (the pan angle, the tilt angle and the hfov angle), which are controlled in VC.NET by mouse events, keyboard events and menus or toolbars. First the panorama is projected onto the virtual sphere according to two formulas; then the frame is projected onto the virtual sphere according to two formulas controlled by the three parameters; this establishes the relationship between a frame and the panorama. With this backward projection, we can fetch the corresponding panorama pixel for each frame pixel.<br />
*One panorama only covers the view from a single site, which is not enough information. If we want to tour around a scene, what should we do? We need to do more. Following my tutor's advice, I built a scene-tour system. It comprises two parts: one builds the scene tour (the scene-tour designer) and the other views it (the scene-tour shower). It mainly involved interface design, without any intricate theory. First, we gather some material, including the scene's map and several panoramas taken in the scene. Second, we load the map, link each site on the map to its corresponding panorama with the scene-tour designer, and adjust the directions. Last, we save the result as a ".tour" file, whose structure we defined ourselves. Now we can open this file in the scene-tour shower and tour around the scene.<br />
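The backward projection described above can be sketched as follows. This is my own reconstruction of the idea, not ptviewer's actual code: each frame pixel defines a ray, the ray is rotated by tilt and pan, and the ray's spherical angles index into the equirectangular panorama.

```python
import math

def frame_to_pano(x, y, frame_w, frame_h, pano_w, pano_h,
                  pan=0.0, tilt=0.0, hfov=math.radians(90)):
    """Backward-project one view-frame pixel to equirectangular panorama coords."""
    f = (frame_w / 2) / math.tan(hfov / 2)        # focal length in pixels
    # ray through the pixel, with the camera looking down +z
    dx, dy, dz = x - frame_w / 2, y - frame_h / 2, f
    # rotate the ray by tilt (around x), then by pan (around y)
    dy, dz = (dy * math.cos(tilt) - dz * math.sin(tilt),
              dy * math.sin(tilt) + dz * math.cos(tilt))
    dx, dz = (dx * math.cos(pan) + dz * math.sin(pan),
              -dx * math.sin(pan) + dz * math.cos(pan))
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    theta = math.atan2(dx, dz)                    # longitude in [-pi, pi]
    phi = math.asin(dy / r)                       # latitude in [-pi/2, pi/2]
    u = (theta / (2 * math.pi) + 0.5) * pano_w
    v = (phi / math.pi + 0.5) * pano_h
    return u, v
```

Iterating this over every frame pixel (with interpolation in the panorama) produces the view, which is why the viewer needs no 3D API at all.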
<br />
==Programming experience about panoramic stitching==<br />
Spherical panoramas that can be viewed through ptviewer are usually stitched from images captured with fisheye cameras, which is inconvenient, expensive and complex. Can we stitch overlapping images taken with a hand-held camera? Guided by this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M. Brown’s recognising-panoramas work] was recommended by my tutor, who said that if I could build a system according to this article, we would make it. The other materials he recommended were [http://hugin.sourceforge.net/ Hugin], panotools, [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift] and [http://enblend.sourceforge.net/ Enblend]. So I began, step by step.<br />
*First, we extract control points (SIFT keypoints) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof. Lowe's SIFT] and autopano-sift's source code carefully and understood clearly how they work. Then I ported the code from C# to C++, which took me more than a month. Later on, the program was refined from time to time, becoming more efficient and more readable.<br />
*Second, the keypoints must be matched. M. Brown matched keypoints with Best-Bin-First search. I downloaded kd-tree source code from [http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. As for removing outliers (false matches), we modified RANSAC to reject them, and our experiments verified its efficiency.<br />
*Third, we calculate the parameters that relate the images to each other. M. Brown used bundle adjustment for this. It was hard for me to program, and I did not find available source code. After being stuck for a month, I changed my mind and built a comparatively simpler system. It is limited and only partially automatic: one of the input images must be a central image, i.e. every other image overlaps with it. The system recognises this central image and stitches the others to it. The projection we used is an 8-parameter matrix calculated with [http://www.ics.forth.gr/~lourakis/levmar/ LM].<br />
*In fact, I never gave up on building a fully automatic system, and thanks to [http://hugin.sourceforge.net/ Hugin] I made it in the end. After downloading Hugin, VIGRA and wxWindows, I compiled them successfully on VC.NET, debugged them step by step and gradually understood how everything worked. I then began to extract the code I needed and found the system monolithic, so I decided to build my system on top of it. I removed wxWindows and built an interface with MSVC, and I also integrated the SIFT code and Enblend's code into this framework. The interface is simple and the system is fully automatic: after several images are input, the system divides them into groups and stitches each group automatically into a cylindrical, planar or spherical panorama.<br />
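Our modified RANSAC is not shown here, but the standard scheme it builds on can be sketched with an illustrative translation model (the real system fits a projective model; the function and threshold names are mine):

```python
import random

def ransac_translation(pairs, iters=200, tol=1.0, seed=0):
    """Fit a 2D translation to matched point pairs with RANSAC.

    pairs: list of ((x, y), (x', y')) correspondences, some of them wrong.
    Returns the best (dx, dy) and the indices of its inlier pairs.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (ax, ay), (bx, by) = rng.choice(pairs)    # minimal sample: one pair
        dx, dy = bx - ax, by - ay                 # hypothesised translation
        inliers = [i for i, ((px, py), (qx, qy)) in enumerate(pairs)
                   if abs(qx - px - dx) < tol and abs(qy - py - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers
```

False matches agree with no consistent motion, so they fail the inlier test for the winning hypothesis and are discarded before parameter refinement.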
I know Hugin is a GUI using panotools as its core engine, and it is wonderful core code. I have never stopped analysing its principles, and I know its interfaces well. I also want to rebuild it, hoping to make it easier to use in the future.<br />
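The 8-parameter projective mapping mentioned above works like this (a sketch of the model only; in the real system the parameters come from LM optimisation, which is omitted, and the helper names are mine):

```python
def warp_point(H, x, y):
    """Apply a 3x3 homography (8 free parameters, H[2][2] fixed to 1) to (x, y)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def reprojection_residuals(H, pairs):
    """Residual vector that an LM optimiser would drive toward zero.

    pairs: list of ((x, y), (u, v)) correspondences between two images.
    """
    res = []
    for (x, y), (u, v) in pairs:
        px, py = warp_point(H, x, y)
        res += [px - u, py - v]
    return res
```

LM iteratively adjusts the eight free entries of H to minimise the squared norm of this residual vector over all matched control points.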
<br />
== Google Summer of Code 2007 ==<br />
After three years of studying panoramas, I believe panotools is the most wonderful open-source code in this field and Hugin the greatest open-source GUI, and I appreciate their contributors. But neither is perfect; there is much work to do to extend them. The subprojects listed on the website are all very important, and I am sure that once they are completed it will be a milestone for the panorama field and will benefit all researchers. Because I have analysed panotools and Hugin for a while and know them well, I am eager to contribute, so I am applying to this project, hoping for a chance.<br />
The items I am interested in:<br />
*[http://wiki.panotools.org/SoC2007_project_Panotools_Architecture Architectural Overhaul of Panotools] I have never stopped studying panotools. I read it from time to time and find it amazing. I once wrote a rough description of its functions' interfaces. I am passionate about this project.<br />
**Phase 1 (1-2 weeks): Read the panotools source code and analyse its structure. Write a document describing its structure and interfaces.<br />
**Phase 2 (6-7 weeks): Rebuild panotools, removing unused source code and writing a clean interface.<br />
**Phase 3 (4-5 weeks): Optimise the source code and write comments. Write documentation for the new panotools.<br />
<br />
*[http://wiki.panotools.org/SoC2007_project_Feature_Descriptor Automatic feature detection for panoramic images] The keypoint detector I have studied most is [http://www.cs.ubc.ca/~lowe/keypoints/ SIFT (2004)]. I am now studying [http://www.vision.ee.ethz.ch/~surf/index.html SURF (2006)], which is said to be faster and more efficient. We all know that whether the stitching process can be automatic mostly depends on whether control points can be extracted properly and matched correctly, so this is a very important job. But there are still conditions under which SIFT does not work well, such as large camera motion. Because I have implemented SIFT and consulted many materials about other keypoint detectors, I find this interesting and want to keep studying this field.<br />
*[http://wiki.panotools.org/SoC2007_projects#Anti-ghosting_HDR_panorama_blending_and_merging_algorithm Anti-ghosting HDR panorama blending and merging algorithm] I studied and implemented the multiresolution spline (1983), which is widely used as a blending method because it avoids slight ghosting and blends seamlessly. The blending code integrated in my system is from Enblend. But when the images to be stitched contain, say, a moving bus, how can we remove it? I know this problem is tough to handle, but it should be solved, because it is common for the images to contain a moving object. I have some knowledge of image blending and am eager to solve this problem.<br />
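The multiresolution-spline idea above can be shown in a toy 1-D form: build Laplacian pyramids of both inputs, blend each frequency band under the (downsampled) mask, and reconstruct. This is only an illustration of the structure that Enblend implements far more carefully:

```python
def _down(x):            # halve resolution by averaging adjacent pairs
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def _up(x, n):           # nearest-neighbour upsample back to length n
    return [x[min(i // 2, len(x) - 1)] for i in range(n)]

def pyramid_blend(a, b, mask, levels=2):
    """Blend 1-D signals a and b (equal power-of-two length) under mask:
    low frequencies are mixed over wide regions, high frequencies over
    narrow ones, which is what hides the seam."""
    ga, gb, gm = [a], [b], [mask]          # Gaussian pyramids
    for _ in range(levels):
        ga.append(_down(ga[-1])); gb.append(_down(gb[-1])); gm.append(_down(gm[-1]))
    # Laplacian levels: detail lost between successive Gaussian levels
    la = [[x - u for x, u in zip(g, _up(gn, len(g)))] for g, gn in zip(ga, ga[1:])]
    lb = [[x - u for x, u in zip(g, _up(gn, len(g)))] for g, gn in zip(gb, gb[1:])]
    # blend the coarsest Gaussian level, then add blended detail per level
    out = [m * x + (1 - m) * y for m, x, y in zip(gm[-1], ga[-1], gb[-1])]
    for level in range(levels - 1, -1, -1):
        out = _up(out, len(la[level]))
        out = [o + m * x + (1 - m) * y
               for o, m, x, y in zip(out, gm[level], la[level], lb[level])]
    return out
```

This hides seams from exposure differences, but it cannot remove a moving object: the object's pixels disagree between inputs at every frequency band, which is exactly the anti-ghosting problem the project targets.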
<br />
I am very glad when learning Hugin/panotools participates in this year's GSoC. Because I am always studying hugin/panotools and want to learn them more. I am eager to colaborate with people who also interested on this field. I think this opportunity is very meaningful for me, so I will devote myself fully on this project once I am accepted. I am totally free during this holiday. Three month or even more is no problem. That period is long enough for me to finish a task, I am confident. I am sure I wouldn't let you down.</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8690User:Girlliyanli2007-03-29T02:15:39Z<p>Girlliyanli: /* Google Summer of Code 2007 */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panorama viewer system running on local machine after port the ptviewer's souce code from jave to c++. I also designed a scene-tour system based on panorama viewer. Later, I began to do study on image stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Programming experience about panorama viewer ==<br />
*The first panorama material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
*One panorama can only cover view from one site, the information is not enough. If we want to tour around a scene, what should to do? Yes, it needs us to do more. Following my tutor's advise, I built a scene-tour system. The scene tour system comprises two parts, one is to build the scene tour, being called scene-tour designer, another is to view the scene tour, being called scene-tour shower. It justly involved interface design without any intricate theory: At first, we should get some material, including the scene's map and several panoramas taken from the scene. Secondly, we input the map and build the relationship between the map's site and corresponding panorama with the scene-tour designer and adjust the directions. Last, we saved and get a file end with ".tour", whose structure we have defined. Now we can open this file on scene-tour shower and tour around the scene.<br />
<br />
==Programming experience about panoramic stitching==<br />
Spherical panoramas which can be viewed through ptviewer are usually stitched from images captured with fisheye cameras. It is inconvenient, expensive and complex. Can we stitch overlapped images taken with hand-hold cameras? Under the guide of this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M.Brown’s recognizing panorama]was recommended by my tutor. He said if I could build a system according to this article, we would make it. The other materials he recommended are [http://hugin.sourceforge.net/ Hugin], panotools ,[http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift], [http://enblend.sourceforge.net/ Enblend],So I began to do it step by step.<br />
*Firstly, we should extract control points(sift) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof.Lown's sift] and autopano-sift's source code carefully and knew clearly how it works. Then I began to port it from c# to c++. It took me more than one month. Later on, the program was modified from time to time, being more efficient and more easily read. <br />
*Secondly, key points should been matched. M.Brown matched key points with Best Bin Fast. I download a source code about kd tree from[http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. As for the outliners' removing(unmatched pairs), we made modification on RANSAC to removed them and our experiment verified its efficiency.<br />
*Thirdly, we should calculate parameters, so that we can build relationship among those images with those parameter. M.Brown used bundle-adjustment to calculate parameter. It is hard for me to program it and I did not find available source code. Detaining for a month, I changed my mind and started to build a comparatively easier system. It is limited and partial automatic. One of inputted image should be a central image, that is, other images are all overlapped with it. The system can recognize this central image and stitching them together. The projection we used is 8-parameters matrix which is calculate with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. <br />
*In fact, I have never given up building up a totally automatic system.Thanks to [http://hugin.sourceforge.net/ Hugin], I made it at last. After downloading hugin,vigra and wxwindow. I compiled them successfully on VC.net. I debugged it step by step and gradually realized how it worked. Then I began to extract useful code I needed and found the system was monolithic. So I decided to build my system on it. I removed the wxwindow and built an interface with MSVC, I also integrated sift's code and Enblend'code into this framwork. The interface is simple and the system is totally automatic, after inputting several images, the system can divided them into parts and stitch each part automatically into cylindrical or planar or spherical panorama.<br />
I know hugin is a GUI using panotools as core engine, which is a wonderful core code. I have never stop analyzing the principle of it and knows its interface well. I also want to rebuilt it, hoping it more easily used in the future for me.<br />
<br />
== Google Summer of Code 2007 ==<br />
After three years study of panorama, I know that panotools is the most wonderfully open source code and Hugin is the greatest GUI open source code. I appreciate their contribution. But neither of them is perfect. There are much work to do to expend them. Those subprojects showed on websites are all very important. I am sure once they are completed. It will be a milestone in the field of panorama and benefits all researchers. Because I analyzed panotools and hugin for a while and knew them well. I have eager to do something on it, so I apply for this project, hoping have a chance.<br />
The items I am interested:<br />
*[http://wiki.panotools.org/SoC2007_project_Panotools_Architecture Architectural Overhaul of Panotools] I have never stopped studying panotool. I read it from time to time and find it amazing. Once I wrote a rough description about its functions'interface. I am passionate with this project and I am confident I can handle it within three months.<br />
**Phase 1(1-2 weeks): Reading source code of panotools and analysing its structure.Writing a document about its structure and interface' description.<br />
**Phase 2(6-7 weeks): Rebuilting panotools, removing unused source code and writing a clear interface.<br />
**Phase 3(4-5 weeks): Optimising the souce code and writing commends.Writng document about new panotools.<br />
<br />
*[http://wiki.panotools.org/SoC2007_project_Feature_Descriptor Automatic feature detection for panoramic images] The key point I studied most is [http://www.cs.ubc.ca/~lowe/keypoints/ SIFT(2004)]. I am now studying [http://www.vision.ee.ethz.ch/~surf/index.html SURF(2006)]which is said to be faster and more efficient. We all know if the stitching process can be automatic or not mostly depends on if the control point can be extract properly and matched correctly. So it is a very important job. But up to now, there are still some conditions SIFT can work well such as the camera moves largely. Because I have programmed on sift and consulted many materials about other key points, I think it is interesting and I want to continue studying on this field.<br />
*[http://wiki.panotools.org/SoC2007_projects#Anti-ghosting_HDR_panorama_blending_and_merging_algorithm Anti-ghosting HDR panorama blending and merging algorithm] I studied and programmed multiple resolution spine(1983),which was widely used as a better blending method because it can avoid slight ghost and blend seamlessly. The source code I integrated in my system is from Enblend. But when the images are to stitch have a running bus, how can we remove it? I know this problem is tough to handle, but it should be solved, because it's common that the images to be stitched have a moving object. I have some knowledge of image blending and am eager to solve the above problem.<br />
<br />
I am very glad when learning Hugin/panotools participates in this year's GSoC. Because I am always studying hugin/panotools and want to learn them more. I am eager to colaborate with people who also interested on this field. I think this opportunity is very meaningful for me, so I will devote myself fully on this project once I am accepted. I am totally free during this holiday. Three month or even more is no problem. That period is long enough for me to finish a task, I am confident. I am sure I wouldn't let you down.</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8689User:Girlliyanli2007-03-29T02:13:42Z<p>Girlliyanli: /* Google Summer of Code 2007 */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panorama viewer system running on local machine after port the ptviewer's souce code from jave to c++. I also designed a scene-tour system based on panorama viewer. Later, I began to do study on image stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Programming experience about panorama viewer ==<br />
*The first panorama material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
*One panorama can only cover view from one site, the information is not enough. If we want to tour around a scene, what should to do? Yes, it needs us to do more. Following my tutor's advise, I built a scene-tour system. The scene tour system comprises two parts, one is to build the scene tour, being called scene-tour designer, another is to view the scene tour, being called scene-tour shower. It justly involved interface design without any intricate theory: At first, we should get some material, including the scene's map and several panoramas taken from the scene. Secondly, we input the map and build the relationship between the map's site and corresponding panorama with the scene-tour designer and adjust the directions. Last, we saved and get a file end with ".tour", whose structure we have defined. Now we can open this file on scene-tour shower and tour around the scene.<br />
<br />
==Programming experience about panoramic stitching==<br />
Spherical panoramas which can be viewed through ptviewer are usually stitched from images captured with fisheye cameras. It is inconvenient, expensive and complex. Can we stitch overlapped images taken with hand-hold cameras? Under the guide of this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M.Brown’s recognizing panorama]was recommended by my tutor. He said if I could build a system according to this article, we would make it. The other materials he recommended are [http://hugin.sourceforge.net/ Hugin], panotools ,[http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift], [http://enblend.sourceforge.net/ Enblend],So I began to do it step by step.<br />
*Firstly, we should extract control points(sift) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof.Lown's sift] and autopano-sift's source code carefully and knew clearly how it works. Then I began to port it from c# to c++. It took me more than one month. Later on, the program was modified from time to time, being more efficient and more easily read. <br />
*Secondly, key points should been matched. M.Brown matched key points with Best Bin Fast. I download a source code about kd tree from[http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. As for the outliners' removing(unmatched pairs), we made modification on RANSAC to removed them and our experiment verified its efficiency.<br />
*Thirdly, we should calculate parameters, so that we can build relationship among those images with those parameter. M.Brown used bundle-adjustment to calculate parameter. It is hard for me to program it and I did not find available source code. Detaining for a month, I changed my mind and started to build a comparatively easier system. It is limited and partial automatic. One of inputted image should be a central image, that is, other images are all overlapped with it. The system can recognize this central image and stitching them together. The projection we used is 8-parameters matrix which is calculate with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. <br />
*In fact, I have never given up building up a totally automatic system.Thanks to [http://hugin.sourceforge.net/ Hugin], I made it at last. After downloading hugin,vigra and wxwindow. I compiled them successfully on VC.net. I debugged it step by step and gradually realized how it worked. Then I began to extract useful code I needed and found the system was monolithic. So I decided to build my system on it. I removed the wxwindow and built an interface with MSVC, I also integrated sift's code and Enblend'code into this framwork. The interface is simple and the system is totally automatic, after inputting several images, the system can divided them into parts and stitch each part automatically into cylindrical or planar or spherical panorama.<br />
I know hugin is a GUI using panotools as core engine, which is a wonderful core code. I have never stop analyzing the principle of it and knows its interface well. I also want to rebuilt it, hoping it more easily used in the future for me.<br />
<br />
== Google Summer of Code 2007 ==<br />
After three years study of panorama, I know that panotools is the most wonderfully open source code and Hugin is the greatest GUI open source code. I appreciate their contribution. But neither of them is perfect. There are much work to do to expend them. Those subprojects showed on websites are all very important. I am sure once they are completed. It will be a milestone in the field of panorama and benefits all researchers. Because I analyzed panotools and hugin for a while and knew them well. I have eager to do something on it, so I apply for this project, hoping have a chance.<br />
The items I am interested:<br />
*[http://wiki.panotools.org/SoC2007_project_Panotools_Architecture Architectural Overhaul of Panotools] I have never stopped studying panotool. I read it from time to time and find it amazing. Once I wrote a rough description about its functions'interface. I am passionate with this project and I am confident I can handle it within three months.<br />
**Phase 1(1-2 weeks): Reading source code of panotools and analysing its structure.Writing a document about its structure and interface' description.<br />
**Phase 2(6-7 weeks): Rebuiltint panotools, removing unused source code and writing a clear interface.<br />
**Phase 3(4-5 weeks): Optimising the souce code and writing commends.Wring document about new panotools.<br />
<br />
*[http://wiki.panotools.org/SoC2007_project_Feature_Descriptor Automatic feature detection for panoramic images] The key point I studied most is [http://www.cs.ubc.ca/~lowe/keypoints/ SIFT(2004)]. I am now studying [http://www.vision.ee.ethz.ch/~surf/index.html SURF(2006)]which is said to be faster and more efficient. We all know if the stitching process can be automatic or not mostly depends on if the control point can be extract properly and matched correctly. So it is a very important job. But up to now, there are still some conditions SIFT can work well such as the camera moves largely. Because I have programmed on sift and consulted many materials about other key points, I think it is interesting and I want to continue studying on this field.<br />
*[http://wiki.panotools.org/SoC2007_projects#Anti-ghosting_HDR_panorama_blending_and_merging_algorithm Anti-ghosting HDR panorama blending and merging algorithm] I studied and programmed multiple resolution spine(1983),which was widely used as a better blending method because it can avoid slight ghost and blend seamlessly. The source code I integrated in my system is from Enblend. But when the images are to stitch have a running bus, how can we remove it? I know this problem is tough to handle, but it should be solved, because it's common that the images to be stitched have a moving object. I have some knowledge of image blending and am eager to solve the above problem.<br />
<br />
I am very glad when learning Hugin/panotools participates in this year's GSoC. Because I am always studying hugin/panotools and want to learn them more. I am eager to colaborate with people who also interested on this field. I think this opportunity is very meaningful for me, so I will devote myself fully on this project once I am accepted. I am totally free during this holiday. Three month or even more is no problem. That period is long enough for me to finish a task, I am confident. I am sure I wouldn't let you down.</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8672User:Girlliyanli2007-03-27T02:41:02Z<p>Girlliyanli: /* Programming experience about panoramic stitching */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of Kaifeng County, Henan Province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science, [http://www.sdu.edu.cn/english05/ Shandong University], Jinan, Shandong Province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science, [http://www.sdu.edu.cn/english05/ Shandong University], Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality Lab (a state key lab), [http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics], Beijing, China<br />
<br />
== Programming Experience ==<br />
*My main programming language is C/C++. I have been programming in it since 2001 and have never stopped. Most of my projects involve image processing. My graduation project, a System of Text-Image Preprocessing, was awarded Shandong University's Excellent Undergraduate Design and involved a great deal of image-processing knowledge, so I know this field well.<br />
*When I enrolled in the HCI&VR lab in 2004, I began to study panoramas. First I analyzed ptviewer's Java code and built a panorama viewer running on the local machine after porting ptviewer's source code from Java to C++. I also designed a scene-tour system based on the panorama viewer. Later I began to study image stitching. After studying the source code of Hugin, SIFT, panotools and Enblend, I built a panoramic stitching system in VC.NET. It is a fully automatic system: the framework is Hugin, the control points come from SIFT, the matching is based on a kd-tree, the blending method is the multi-resolution spline, and RANSAC was modified so that outliers can be removed more quickly.<br />
<br />
== Programming experience about panorama viewer ==<br />
*The first panorama material I consulted was [http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben's panorama tutorial]. The first panorama system I programmed was a panoramic viewer. As we all know, [http://webuser.hs-furtwangen.de/~dersch/ ptviewer] was used to view panoramas, but the source code I could find at the time was written in Java, as a Java applet. So it seemed that if we wanted to view panoramas on a local machine, we had to port ptviewer from Java to C++. It took me about two months to analyze the interfaces of the source code, write some documents about its structure, build a framework, design several classes, write the detailed code and remove small bugs. Although my tutor advised me to use Direct3D to speed it up, I found the viewer worked well without Direct3D, so all the source code is plain C++. The principle of ptviewer, as I understand it, is as follows: the input is a spherical panorama, and what we want is a frame on the screen, with a virtual sphere between them. The frame is determined by three viewing parameters of the virtual sphere: the pan angle, the tilt angle and the hfov angle, which are controlled in VC.NET by mouse events, keyboard events, menus and toolbars. First the panorama is projected onto the virtual sphere by two formulas; then the frame is projected onto the virtual sphere by two formulas controlled by the three parameters, which establishes the relationship between a frame and the panorama. With this backward projection we can fetch the corresponding panorama pixel for each frame pixel.<br />
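The backward projection described above can be sketched in code. This is a minimal illustration under assumed conventions, not ptviewer's actual source: the function and parameter names are invented, and the "two formulas" are taken here to be the standard ray-to-equirectangular mapping.

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

// Panorama coordinates (column u, row v) for one output-frame pixel.
struct PanoPixel { double u, v; };

// Map a frame pixel (x, y) back to the equirectangular panorama,
// given the three viewing parameters pan, tilt and hfov (radians).
PanoPixel backwardProject(double x, double y,
                          int frameW, int frameH,
                          double pan, double tilt, double hfov,
                          int panoW, int panoH)
{
    // The frame lies on a plane at focal distance f from the sphere centre.
    double f = (frameW / 2.0) / std::tan(hfov / 2.0);
    double vx = x - frameW / 2.0;   // right
    double vy = frameH / 2.0 - y;   // up
    double vz = f;                  // forward

    // Rotate the viewing ray by tilt (about x), then pan (about y).
    double cy = vy * std::cos(tilt) - vz * std::sin(tilt);
    double cz = vy * std::sin(tilt) + vz * std::cos(tilt);
    double cx = vx * std::cos(pan) + cz * std::sin(pan);
    double cw = -vx * std::sin(pan) + cz * std::cos(pan);

    // Ray direction -> longitude/latitude -> equirectangular pixel.
    double lon = std::atan2(cx, cw);                  // [-pi, pi]
    double lat = std::atan2(cy, std::hypot(cx, cw));  // [-pi/2, pi/2]
    return PanoPixel{(lon / (2 * kPi) + 0.5) * panoW,
                     (0.5 - lat / kPi) * panoH};
}
```

For example, the centre pixel of a 640x480 frame with zero pan and tilt lands at the centre of the panorama, as expected of a backward mapping.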
*One panorama only covers the view from one site, which is not enough information. If we want to tour around a scene, more is needed. Following my tutor's advice, I built a scene-tour system. It comprises two parts: one builds the scene tour (the scene-tour designer) and the other views it (the scene-tour shower). It mainly involved interface design without any intricate theory. First, we gather some material, including the scene's map and several panoramas taken in the scene. Second, we load the map and, with the scene-tour designer, link each site on the map to its corresponding panorama and adjust the directions. Last, we save the result as a file ending in ".tour", whose structure we defined ourselves. We can then open this file in the scene-tour shower and tour around the scene.<br />
<br />
==Programming experience about panoramic stitching==<br />
Spherical panoramas that can be viewed through ptviewer are usually stitched from images captured with fisheye cameras, which is inconvenient, expensive and complex. Can we stitch overlapping images taken with a hand-held camera? Guided by this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M. Brown's "Recognising Panoramas"] was recommended by my tutor, who said that if I could build a system according to this article, we would have made it. The other materials he recommended were [http://hugin.sourceforge.net/ Hugin], panotools, [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift] and [http://enblend.sourceforge.net/ Enblend], so I began to work step by step.<br />
*First, we extract control points (SIFT keypoints) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/ Prof. Lowe's SIFT] and autopano-sift's source code carefully and understood clearly how they work. Then I ported autopano-sift from C# to C++, which took me more than one month. Later the program was modified from time to time to make it more efficient and easier to read. <br />
*Second, the keypoints should be matched. M. Brown matched keypoints with Best-Bin-First search. I downloaded kd-tree source code from [http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. As for removing outliers (mismatched pairs), we modified RANSAC to remove them, and our experiments verified its efficiency.<br />
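The outlier-removal step can be illustrated with a toy RANSAC loop. This is a hypothetical sketch rather than the modified RANSAC mentioned above: it fits a pure 2D translation (far simpler than a real panoramic motion model) from one randomly sampled match and keeps the hypothesis with the most inliers; all names are illustrative.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

struct Pt { double x, y; };
struct Match { Pt a, b; };  // a in image 1, b in image 2

// Return indices of matches consistent with the best translation found.
std::vector<int> ransacTranslation(const std::vector<Match>& m,
                                   int iterations, double tol)
{
    std::vector<int> best;
    for (int it = 0; it < iterations; ++it) {
        // Hypothesis from one random sample (a translation needs one pair).
        const Match& s = m[std::rand() % m.size()];
        double dx = s.b.x - s.a.x, dy = s.b.y - s.a.y;

        // Count matches that agree with the hypothesis within tol pixels.
        std::vector<int> inliers;
        for (int i = 0; i < (int)m.size(); ++i) {
            double ex = m[i].a.x + dx - m[i].b.x;
            double ey = m[i].a.y + dy - m[i].b.y;
            if (std::hypot(ex, ey) < tol) inliers.push_back(i);
        }
        if (inliers.size() > best.size()) best = inliers;
    }
    return best;
}
```

With three matches shifted by (10, 0) and one stray match, the loop returns the three consistent indices and drops the outlier.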
*Third, we calculate the parameters that relate the images to each other. M. Brown used bundle adjustment to calculate the parameters. It was hard for me to program, and I did not find available source code. After being stuck for a month, I changed my mind and started to build a comparatively simpler system. It is limited and only partially automatic: one of the input images must be a central image, that is, all the other images overlap with it. The system can recognize this central image and stitch the images together. The projection we used is an 8-parameter matrix calculated with [http://www.ics.forth.gr/~lourakis/levmar/ Levenberg-Marquardt (LM)]. <br />
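The 8-parameter matrix is a 3x3 projective transform with the last entry fixed to 1, leaving eight unknowns for LM to estimate from control-point pairs. The sketch below only shows how such a matrix maps a point once the parameters are known; the LM fitting itself is omitted, and the names are illustrative.

```cpp
struct P2 { double x, y; };

// Apply a 3x3 projective transform h (row-major, h[8] normally 1).
P2 applyHomography(const double h[9], P2 p)
{
    double X = h[0] * p.x + h[1] * p.y + h[2];
    double Y = h[3] * p.x + h[4] * p.y + h[5];
    double W = h[6] * p.x + h[7] * p.y + h[8];
    return P2{X / W, Y / W};  // perspective divide
}
```

The identity matrix leaves points unchanged; setting h[2] = 5 translates every point by 5 pixels in x, and non-zero h[6], h[7] introduce the perspective foreshortening that a 6-parameter affine model cannot express.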
*In fact, I never gave up on building a fully automatic system, and thanks to [http://hugin.sourceforge.net/ Hugin] I made it in the end. After downloading hugin, vigra and wxWindows, I compiled them successfully in VC.NET. I debugged the code step by step and gradually understood how it worked. Then I began to extract the code I needed and found the system monolithic, so I decided to build my system on top of it. I removed wxWindows, built an interface with MSVC, and integrated the SIFT code and Enblend's code into this framework. The interface is simple and the system is fully automatic: after several images are input, the system can divide them into groups and stitch each group automatically into a cylindrical, planar or spherical panorama.<br />
I know hugin is a GUI using panotools as its core engine, and panotools is wonderful core code. I have never stopped analyzing its principles and know its interfaces well. I would also like to rebuild it, hoping to make it easier to use in the future.<br />
<br />
== Google Summer of Code 2007 ==<br />
After three years of studying panoramas, I know that panotools is a most wonderful piece of open-source code and Hugin is a great open-source GUI. I appreciate their contribution, but neither of them is perfect, and there is much work to do to extend them. The subprojects shown on the website are all very important; I am sure that once they are completed, it will be a milestone in the field of panoramas and will benefit all researchers. Because I have analyzed panotools and hugin for a while and know them well, I am eager to contribute, so I am applying to this project and hoping for a chance.<br />
The items I am interested in:<br />
*[http://wiki.panotools.org/SoC2007_project_Panotools_Architecture Architectural Overhaul of Panotools] I have never stopped studying panotools. I read it from time to time and find it amazing. I once wrote a rough description of its function interfaces. I am passionate about this item and I think I can handle this subproject.<br />
*[http://wiki.panotools.org/SoC2007_project_Feature_Descriptor Automatic feature detection for panoramic images] The keypoint method I have studied most is [http://www.cs.ubc.ca/~lowe/keypoints/ SIFT (2004)]. I am now studying [http://www.vision.ee.ethz.ch/~surf/index.html SURF (2006)], which is said to be faster and more efficient. Whether the stitching process can be fully automatic depends largely on whether control points can be extracted properly and matched correctly, so this is a very important job. However, there are still conditions under which SIFT does not work well, for example when the camera moves a lot. Because I have programmed SIFT and consulted many materials about other keypoint detectors, I find this topic interesting and want to continue studying this field.<br />
*[http://wiki.panotools.org/SoC2007_projects#Anti-ghosting_HDR_panorama_blending_and_merging_algorithm Anti-ghosting HDR panorama blending and merging algorithm] I studied and implemented the multi-resolution spline (1983), which is widely used as a better blending method because it can avoid slight ghosting and blend seamlessly. The source code I integrated into my system is from Enblend. But when the images to be stitched contain a moving object, say a passing bus, how can we remove it? I know this problem is tough to handle, but it should be solved, because it is common for the images to be stitched to contain a moving object. I have some knowledge of image blending and am eager to solve this problem.<br />
<br />
I was very glad to learn that Hugin/panotools is participating in this year's GSoC, because I have long been studying hugin/panotools and want to learn more about them. I am eager to collaborate with people who are also interested in this field. This opportunity is very meaningful for me, so I will devote myself fully to the project once I am accepted. I am totally free during this holiday; three months or even more is no problem, and that period is long enough for me to finish a task. I am confident I will not let you down.</div>
Girlliyanli
https://wiki.panotools.org/index.php?title=Historical:Google_SoC_2007&diff=8669 Historical:Google SoC 2007 2007-03-26T07:51:14Z
<p>Girlliyanli: /* Next Deadline */</p>
<hr />
<div>We are passionate about [http://www.worldwidepanorama.com/ our images]. We look for students passionate about their code to help us make better images.<br />
<br />
See [[SoC 2007 overview]] for usage hints and a page list.<br />
==Next Deadline==<br />
* March 24: Students applications<br />
* [http://code.google.com/support/bin/answer.py?answer=60325&topic=10729 program timeline]<br />
<br />
==Students==<br />
Want to participate? We want to help you be a successful candidate.<br />
* '''Deadline: March 24'''<br />
* join the [http://lists.sourceforge.net/lists/listinfo/panotools-devel mailing list]<br />
* read [http://groups.google.com/group/google-summer-of-code-announce/web/guide-to-the-gsoc-web-app-for-student-applicants Applicants Guide] and check if you qualify.<br />
* prepare the answers to [http://wiki.panotools.org/SoC2007_application#Does_your_organization_have_an_application_template_you_would_like_to_see_students_use.3F_If_so.2C_please_provide_it_now the questions in our application template]<br />
* read below about our software universe.<br />
* read the [[SoC2007_projects|projects]] ideas.<br />
* if you have an idea that is not listed there, please propose it to the mailing list.<br />
* if you see an idea there that you like, take ownership of it.<br />
* '''Contact the mentioned mentor and/or the mailing list if no mentor is mentioned.'''<br />
* refine the idea, add detail, describe what you intend to do and how, and work with your mentor and the steering committee to flesh your project out so that Google's Open Source Program Office will accept it.<br />
* on the idea page add a short bio, explain why you are interested in taking up that particular idea, and describe your relationship to panorama making in general and to hugin in particular.<br />
* [http://code.google.com/soc/student.html apply] to Google before March 24. We will do our utmost to help you to a successful application.<br />
* if your application is accepted by Google, be ready to submit a detailed work plan with the sub-tasks of your project and the time you intend to allocate to each of them. We will help you shape that too.<br />
<br />
==Project Ideas / Our Software Universe==<br />
feel free to add / specify the [[SoC2007_projects|projects]] ideas.<br />
<br />
While we might consider application from students to write code in related fields / other application, our interest is to recruit students to work on these tools:<br />
<br />
===hugin===<br />
hugin is the hub of our activity. It is the most advanced OpenSource GUI to create stitched panoramas from 360°x180° full sphericals [http://en.wikipedia.org/wiki/QuickTime_VR] to gigapixel size stitched images. Moreover it has some unique features such as the correction of chromatic aberration or (soon) HDR stitching. [http://hugin.sourceforge.net/hugin Project page]<br />
<br />
===panotools===<br />
panotools is the library powering the magic. It is an extremely versatile library that can be used not only to stitch images seamlessly, but also to correct many lens distortions or remap images to different projections.<br />
<br />
Initially developed by Professor Helmut Dersch in 1998, this set of tools to warp and stitch images was born ahead of its time. Only a decade later did competing products of equal versatility and functionality start to appear.<br />
<br />
A number of proprietary GUIs have been commercialized for the panotools, notably PTgui, PTassembler, and PTmac.<br />
<br />
[http://hugin.sourceforge.net/panotools Project Page]<br />
<br />
===Control Point Generator===<br />
One of the critical tasks of stitching images is to register the position of each image relative to the others with so-called control points. hugin works with a plug-in for that. The most popular are autopano and autopano-SIFT.<br />
<br />
===blending===<br />
Once the images are registered in space and warped by panotools, the seams are still visible and need to be blended. Again, hugin works with a plug-in for that. The most popular is [http://enblend.sourceforge.net/ Enblend]<br />
<br />
===RAW conversions / HDR / tonemapping / other digital photo techniques===<br />
Panorama creation presents some unique challenges to the standard image processing workflow in modern digital photography.<br />
* lens distortions and their effect on RAW conversion.<br />
* higher [[dynamic range]] across the image.<br />
<br />
===freepv panorama viewer===<br />
The resulting images are 2D, but a full spherical 360°x180° panorama can be reprojected to create a VR. There are a number of technologies to view VR, and [http://freepv.sourceforge.net/ freepv] is an effort to build a universal viewer.<br />
<br />
==Organization==<br />
We have successfully applied as [[SoC2007_application|mentoring organization]]<br />
<br />
===[[SoC2007_application#Mentors|Mentors]]===<br />
* Pablo d'Angelo, Germany<br />
* Herbert Bay, Switzerland<br />
* John Cupitt, United Kingdom<br />
* Daniel M. German, Canada<br />
* JD Smith, USA<br />
<br />
===[[SoC2007_application#Coordinators|Organizers]]===<br />
Our Organizers<br />
* Yuval Levy, Canada<br />
* Alexandre Prokoudine, Russia<br />
<br />
===[[SoC2007_application#Steering_Committee|Steering Committee]]===<br />
We have a steering committee of experienced industry and community leaders to advise the Mentors and Students.<br />
* [[SoC2007_application#G._Donald_Bain|Don Bain]], USA, University of California Berkeley, co-founder of the WWP and board member of IVRPA.<br />
* [[SoC2007_application#Aldo_Hoeben|Aldo Hoeben]], The Netherlands, developer of the SPi-V shockwave panorama engine, and board member of IVRPA.<br />
* [[SoC2007_application#Erik_Krause|Erik Krause]], Germany, a well-known member of the user community around PanoTools.<br />
* [[SoC2007_application#Mickael_Therer|Mickael Therer]], Belgium, Photographer.<br />
* [[SoC2007_application#Ken_Turkowski|Ken Turkowski]], USA, of the original QuickTimeVR team.<br />
* [[SoC2007_application#Luca_N._Vascon|Luca N. Vascon]], Italy, professor at the Multimedia Laboratory of IUAV university in Venice.<br />
<br />
===Community Backing===<br />
This organization is supported and endorsed by [[SoC2007_Supporters|these]] people and organizations.<br />
<br />
[[Category:Community:Project]]</div>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panorama viewer system running on local machine after port the ptviewer's souce code from jave to c++. I also designed a scene-tour system based on panorama viewer. Later, I began to do study on image stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Programming experience about panorama viewer ==<br />
*The first panorama material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
*One panorama can only cover view from one site, the information is not enough. If we want to tour around a scene, what should to do? Yes, it needs us to do more. Following my tutor's advise, I built a scene-tour system. The scene tour system comprises two parts, one is to build the scene tour, being called scene-tour designer, another is to view the scene tour, being called scene-tour shower. It justly involved interface design without any intricate theory: At first, we should get some material, including the scene's map and several panoramas taken from the scene. Secondly, we input the map and build the relationship between the map's site and corresponding panorama with the scene-tour designer and adjust the directions. Last, we saved and get a file end with ".tour", whose structure we have defined. Now we can open this file on scene-tour shower and tour around the scene.<br />
<br />
==Programming experience about panoramic stitching==<br />
Spherical panoramas which can be viewed through ptviewer are usually stitched from images captured with fisheye cameras. It is inconvenient, expensive and complex. Can we stitch overlapped images taken with hand-hold cameras? Under the guide of this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M.Brown’s recognizing panorama]was recommended by my tutor. He said if I could build a system according to this article, we would make it. The other materials he recommended are [http://hugin.sourceforge.net/ Hugin], panotools ,[http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift], [http://enblend.sourceforge.net/ Enblend],So I began to do it step by step.<br />
*Firstly, we should extract control points(sift) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof.Lown's sift] and autopano-sift's source code carefully and knew clearly how it works. Then I began to port it from c# to c++. It took me more than one month. Later on, the program was modified from time to time, being more efficient and more easily read. <br />
*Secondly, key points should been matched. M.Brown matched key points with Best Bin Fast. I download a source code about kd tree from[http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. As for the outliners' removing(unmatched pairs), we made modification on RANSAC to removed them and our experiment verified its efficiency.<br />
*Thirdly, we should calculate parameters, so that we can build relationship among those images with those parameter. M.Brown used bundle-adjustment to calculate parameter. It is hard for me to program it and I did not find available source code. Detaining for a month, I changed my mind and started to build a comparatively easier system. It is limited and partial automatic. One of inputted image should be a central image, that is, other images are all overlapped with it. The system can recognize this central image and stitching them together. The projection we used is 8-parameters matrix which is calculate with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. <br />
*In fact, I have never given up building up a totally automatic system.Thanks to [http://hugin.sourceforge.net/ Hugin], I made it at last. After downloading hugin,vigra and wxwindow. I compiled them successfully with VC.net. I debugged it step by step and gradually realize how it works. Then I began to extract useful code I needed and found the system monolithic. So I decided to build my system on it, using hugin as framework and integrating sift's code and blending'code from Enblend.<br />
I know hugin is a GUI based on panotools, which is a wonderful core code. I have never stop analyzing the principle of it and knows its interface well.<br />
<br />
== Google Summer of Code 2007 ==<br />
After three years study of panorama, I know that panotools is the most wonderfully open source code and Hugin is the greatest GUI open source code. I appreciate their contribution. But neither of them is perfect. There are much work to do to expend them. Those subprojects showed on websites are all very important. I am sure once they are completed. It will be a milestone in the field of panorama and benefits all researchers. Because I analyzed panotools and hugin for a while and knew them well. I have eager to do something on it, so I apply for this project, hoping have a chance.<br />
The items I am interested:<br />
*[http://wiki.panotools.org/SoC2007_project_Panotools_Architecture Architectural Overhaul of Panotools] I have never stopped studying panotool. I read it from time to time and find it amazing. Once I wrote a rough description about its functions'interface. I am passionate with this item and I think I can handle this subproject.<br />
*[http://wiki.panotools.org/SoC2007_project_Feature_Descriptor Automatic feature detection for panoramic images] The key point I studied most is [http://www.cs.ubc.ca/~lowe/keypoints/ SIFT(2004)]. I am now studying [http://www.vision.ee.ethz.ch/~surf/index.html SURF(2006)]which is said to be faster and more efficient. We all know if the stitching process can be automatic or not mostly depends on if the control point can be extract properly and matched correctly. So it is a very important job. But up to now, there are still some conditions SIFT can work well such as the camera moves largely. Because I have programmed on sift and consulted many materials about other key points, I think it is interesting and I want to continue studying on this field.<br />
*[http://wiki.panotools.org/SoC2007_projects#Anti-ghosting_HDR_panorama_blending_and_merging_algorithm Anti-ghosting HDR panorama blending and merging algorithm] I studied and programmed multiple resolution spine(1983),which was widely used as a better blending method because it can avoid slight ghost and blend seamlessly. The source code I integrated in my system is from Enblend. But when the images are to stitch have a running bus, how can we remove it? I know this problem is tough to handle, but it should be solved, because it's common that the images to be stitched have a moving object. I have some knowledge of image blending and am eager to solve the above problem.<br />
<br />
I am very glad when learning Hugin/panotools participates in this year's GSoC. Because I am always studying hugin/panotools and want to learn them more. I am eager to colaborate with people who also interested on this field. I think this opportunity is very meaningful for me, so I will devote myself fully on this project once I am accepted. I am totally free during this holiday. Three month or even more is no problem. That period is long enough for me to finish a task, I am confident. I am sure I wouldn't let you down.</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8665User:Girlliyanli2007-03-26T06:42:48Z<p>Girlliyanli: /* Programming experience about panoramic stitching */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panorama viewer system running on local machine after port the ptviewer's souce code from jave to c++. I also designed a scene-tour system based on panorama viewer. Later, I began to do study on image stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Programming experience about panorama viewer ==<br />
*The first panorama material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
*One panorama can only cover view from one site, the information is not enough. If we want to tour around a scene, what should to do? Yes, it needs us to do more. Following my tutor's advise, I built a scene-tour system. The scene tour system comprises two parts, one is to build the scene tour, being called scene-tour designer, another is to view the scene tour, being called scene-tour shower. It justly involved interface design without any intricate theory: At first, we should get some material, including the scene's map and several panoramas taken from the scene. Secondly, we input the map and build the relationship between the map's site and corresponding panorama with the scene-tour designer and adjust the directions. Last, we saved and get a file end with ".tour", whose structure we have defined. Now we can open this file on scene-tour shower and tour around the scene.<br />
<br />
== Programming experience about panoramic stitching ==<br />
Spherical panoramas that can be viewed in ptviewer are usually stitched from images captured with fisheye lenses, which is inconvenient, expensive and complex. Can we instead stitch overlapping images taken with a hand-held camera? With this goal in mind, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M. Brown’s "Recognising Panoramas"] was recommended by my tutor, who said that if I could build a system according to this article, we would have made it. The other materials he recommended were [http://hugin.sourceforge.net/ Hugin], panotools, [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift] and [http://enblend.sourceforge.net/ Enblend]. So I set out to do it step by step.<br />
*Firstly, control points (SIFT keypoints) must be extracted from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof. Lowe's SIFT] and autopano-sift's source code carefully until I clearly understood how they work, then spent more than a month porting the code from C# to C++. The program has since been revised repeatedly to make it more efficient and easier to read.<br />
*Secondly, the keypoints must be matched. M. Brown matched keypoints with Best-Bin-First search; I downloaded a kd-tree implementation from [http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. To remove outliers (mismatched pairs), we modified RANSAC, and our experiments verified its efficiency.<br />
*Thirdly, we must estimate the parameters that relate the images to one another. M. Brown used bundle adjustment to compute them, but it was hard for me to implement and I could not find usable source code. After being stuck for a month, I changed course and built a comparatively simpler system. It is limited and only partially automatic: one of the input images must be a central image, that is, every other image must overlap with it. The system recognizes this central image and stitches the rest to it. The projection we used is an 8-parameter matrix (a planar homography) computed with [http://www.ics.forth.gr/~lourakis/levmar/ LM] (Levenberg-Marquardt).<br />
*In fact, I never gave up on building a fully automatic system, and thanks to [http://hugin.sourceforge.net/ Hugin] I finally made it. After downloading hugin, vigra and wxWidgets, I compiled them successfully with VC.net, debugged the code step by step, and gradually came to understand how it works. As I started extracting the code I needed, I found the system rather monolithic, so I decided to build my system on top of it, using hugin as the framework and integrating the SIFT code and the blending code from Enblend.<br />
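The RANSAC step in the second bullet can be illustrated with a toy version. This sketch fits only a 2-D translation between matched point pairs (so one pair is a minimal sample), where a real stitcher fits an 8-parameter homography; the function name and parameters are mine, for illustration:<br />

```python
import random

def ransac_translation(matches, n_iters=200, tol=2.0, seed=0):
    """Toy RANSAC: estimate a 2-D translation between matched point pairs
    ((x1, y1), (x2, y2)) and reject outliers. Returns the refit translation
    and the inlier set."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iters):
        # minimal sample: a single pair fully determines a translation
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        # count pairs consistent with this hypothesis within the tolerance
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) <= tol
                   and abs(m[1][1] - m[0][1] - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refit on all inliers (least squares = mean offset for a translation)
    dx = sum(b[0] - a[0] for a, b in best_inliers) / len(best_inliers)
    dy = sum(b[1] - a[1] for a, b in best_inliers) / len(best_inliers)
    return (dx, dy), best_inliers
```

Even with a large fraction of gross mismatches, the consensus set recovers the true offset, which is why RANSAC works so well for control-point cleaning.<br />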
I know hugin is a GUI built on panotools, which is a wonderful piece of core code. I have never stopped analyzing its principles, and I know its interfaces well.<br />
<br />
== Google Summer of Code 2007 ==<br />
After three years of studying panoramas, I consider panotools the most wonderful open-source core and Hugin the greatest open-source GUI for it, and I appreciate their contributors' work. But neither is perfect: much work remains to complete them. The subprojects listed on the website are all very important, and I am sure that once they are finished they will be a milestone in the field of panoramas and will benefit all researchers. Because I have analyzed panotools and hugin for a while and know them well, I am eager to contribute, so I am applying to this project in the hope of getting the chance.<br />
The items I am interested in:<br />
*[http://wiki.panotools.org/SoC2007_project_Feature_Descriptor Automatic feature detection for panoramic images]<br />
The keypoint detector I have studied most is [http://www.cs.ubc.ca/~lowe/keypoints/ SIFT (2004)]; I am now studying [http://www.vision.ee.ethz.ch/~surf/index.html SURF (2006)], which is said to be faster and more efficient. Whether the stitching process can be fully automatic depends mostly on whether the control points can be extracted properly and matched correctly, so this is a very important job. Having implemented SIFT and read material on other keypoint detectors, I know it is still difficult to obtain a good descriptor when the camera moves a lot.<br />
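On the matching side, Lowe's SIFT paper pairs descriptors with a nearest-neighbour ratio test: a match is kept only if the best candidate is clearly better than the runner-up. A brute-force Python sketch (in practice a kd-tree such as ANN answers the nearest-neighbour queries; the function name is mine):<br />

```python
def ratio_test_matches(desc1, desc2, ratio=0.8):
    """Match each descriptor in desc1 to desc2 using Lowe's ratio test.
    Descriptors are equal-length numeric tuples; desc2 needs >= 2 entries.
    Returns (index_in_desc1, index_in_desc2) pairs for accepted matches."""
    def dist2(a, b):
        # squared Euclidean distance between two descriptors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    matches = []
    for i, d1 in enumerate(desc1):
        # sort candidates by distance; a kd-tree would replace this scan
        cand = sorted(range(len(desc2)), key=lambda j: dist2(d1, desc2[j]))
        best, second = cand[0], cand[1]
        # accept only if the nearest neighbour beats the second nearest
        # by the ratio (compared on squared distances, hence ratio ** 2)
        if dist2(d1, desc2[best]) < (ratio ** 2) * dist2(d1, desc2[second]):
            matches.append((i, best))
    return matches
```

Ambiguous descriptors, whose two best candidates are nearly equidistant, are rejected rather than matched wrongly, which greatly reduces the outliers RANSAC must remove later.<br />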
*[http://wiki.panotools.org/SoC2007_projects#Anti-ghosting_HDR_panorama_blending_and_merging_algorithm Anti-ghosting HDR panorama blending and merging algorithm]<br />
I studied and implemented the multiresolution spline (1983), which is widely used as a blending method because it avoids slight ghosting and blends seamlessly; the blending code I integrated into my system comes from Enblend. But when the images to be stitched contain something like a passing bus, how can we remove it? I know this problem is tough to handle, but it should be solved, because it is common for the images being stitched to contain a moving object. I have some knowledge of the problem and am passionate about it.<br />
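The idea of the multiresolution spline can be shown in one dimension: decompose both inputs into frequency bands, blend each band with the seam mask downsampled to that band's scale, and sum the results, so coarse bands blend over wide regions while fine bands blend narrowly. A toy Python sketch with crude box filters (Enblend's real implementation is 2-D with proper Gaussian/Laplacian pyramids; the helper names are mine):<br />

```python
def down(s):
    """Blur + decimate: average adjacent pairs (a crude pyramid step)."""
    return [(s[2 * i] + s[2 * i + 1]) / 2 for i in range(len(s) // 2)]

def up(s, n):
    """Nearest-neighbour expand back to length n."""
    return [s[min(i // 2, len(s) - 1)] for i in range(n)]

def blend_multires(a, b, levels=2):
    """1-D sketch of the multiresolution spline (Burt & Adelson, 1983).
    len(a) == len(b) and must be divisible by 2 ** levels."""
    n = len(a)
    mask = [0.0 if i < n // 2 else 1.0 for i in range(n)]  # 0 -> take a, 1 -> take b
    out = [0.0] * n
    ga, gb, gm = list(a), list(b), mask
    for lev in range(levels):
        da, db = down(ga), down(gb)
        la = [x - y for x, y in zip(ga, up(da, len(ga)))]  # detail band of a
        lb = [x - y for x, y in zip(gb, up(db, len(gb)))]  # detail band of b
        # blend this band with the mask downsampled to the same scale
        band = [(1 - m) * x + m * y for x, y, m in zip(la, lb, gm)]
        for _ in range(lev):  # bring the band back to full length
            band = up(band, len(band) * 2)
        out = [o + f for o, f in zip(out, band)]
        ga, gb, gm = da, db, down(gm)
    # blend the coarse residual and add it back in
    res = [(1 - m) * x + m * y for x, y, m in zip(ga, gb, gm)]
    for _ in range(levels):
        res = up(res, len(res) * 2)
    return [o + r for o, r in zip(out, res)]
```

A useful sanity check is that blending a signal with itself reconstructs it exactly, since the bands plus residual are an exact decomposition. Note that this scheme only hides exposure seams; it cannot remove a moving object, which is exactly why the anti-ghosting project is needed.<br />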
*[http://wiki.panotools.org/SoC2007_project_Panotools_Architecture Architectural Overhaul of Panotools]<br />
I have never stopped studying panotools; I read the code from time to time. I once wrote a rough outline of its functions' interfaces at my tutor's request, but I have not fully figured out its principles yet. Still, it seems that once we know its interfaces well, we can reconstruct it.</div>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panorama viewer system running on local machine after port the ptviewer's souce code from jave to c++. I also designed a scene-tour system based on panorama viewer. Later, I began to do study on image stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Programming experience about panorama viewer ==<br />
*The first panorama material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
*One panorama can only cover view from one site, the information is not enough. If we want to tour around a scene, what should to do? Yes, it needs us to do more. Following my tutor's advise, I built a scene-tour system. The scene tour system comprises two parts, one is to build the scene tour, being called scene-tour designer, another is to view the scene tour, being called scene-tour shower. It justly involved interface design without any intricate theory: At first, we should get some material, including the scene's map and several panoramas taken from the scene. Secondly, we input the map and build the relationship between the map's site and corresponding panorama with the scene-tour designer and adjust the directions. Last, we saved and get a file end with ".tour", whose structure we have defined. Now we can open this file on scene-tour shower and tour around the scene.<br />
<br />
==Programming experience about panoramic stitching==<br />
Spherical panoramas which can be viewed through ptviewer are usually stitched from images captured with fisheye cameras. It is inconvenient, expensive and complex. Can we stitch overlapped images taken with hand-hold cameras? Under the guide of this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M.Brown’s recognizing panorama]was recommended by my tutor. He said if I could build a system according to this article, we would make it. The other materials he recommended are [http://hugin.sourceforge.net/ Hugin], panotools ,[http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift], [http://enblend.sourceforge.net/ Enblend],So I began to do it step by step.<br />
*Firstly, we should extract control points(sift) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof.Lown's sift] as well as autopano-sift's source code carefully and knew clearly how it works. Then I began to port it from c# to c++. It took me more than one month. Later on, the program was modified from time to time, being more efficient and more easily read. <br />
*Secondly, key points should been matched. M.Brown matched key points with Best Bin Fast. I download a source code about kd tree from[http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. As for the outliners' removing(unmatched pairs), we made modification on RANSAC to removed them and our experiment verified its efficiency.<br />
*Thirdly, we should calculate parameters, so that we can build relationship among those images with those parameter. M.Brown used bundle-adjustment to calculate parameter. It is hard for me to realize it and I did not find available source code. Detaining for a month, I changed my mind and started to build a comparatively easier system. It is limited and partial automatic. One of inputted image should be a central image, that is, other images are all overlapped with it. The system can recognize this central image and stitching them together. The projection we used is 8-parameters matrix which is calculate with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. <br />
*In fact, I have never given up building up a totally automatic system.Thanks to [http://hugin.sourceforge.net/ Hugin], I made it at last. After downloading hugin and download vigra and wxwindow. I compiled them successfully with VC.net. I debugged it step by step and gradually realize how it works. Then I began to extract useful code I needed and found the system monolithic. So I decided to build my system on it, using hugin as framework and integrating sift's code and blending'code from Enblend.<br />
I know hugin is a GUI based on panotools, which is a wonderful core code. I have never stop analyzing the principle of it.<br />
<br />
== Google Summer of Code 2007 ==<br />
After three years study of panorama, I know that panotools is the most wonderfully open source code and Hugin is the greatest GUI open source code. I appreciate their contribution. But neither of them is perfect. There are much work to do to complement them. Those subprojects showed on websites are all very important. I am sure once they are completed. It will be a milestone in the field of panorama and benefits all researchers. Because I analyzed panotools and hugin for a while and knew them well. I have eager to do something on it, so I apply for this project, hoping have a chance.<br />
The items I am interested:<br />
*[http://wiki.panotools.org/SoC2007_project_Feature_Descriptor Automatic feature detection for panoramic images]<br />
The key point I studied most is [http://www.cs.ubc.ca/~lowe/keypoints/ SIFT(2004)]. I am now studying [http://www.vision.ee.ethz.ch/~surf/index.html SURF(2006)]which is said to be faster and more efficient. We all know if the stitching process can be automatic or not mostly depends on if the control point can be extract properly and matched correctly. So it is a very important job. Because I programmed on sift and consulted material about other key points. It is still difficult to get a good descriptor if the cameras move largely.<br />
*[http://wiki.panotools.org/SoC2007_projects#Anti-ghosting_HDR_panorama_blending_and_merging_algorithm Anti-ghosting HDR panorama blending and merging algorithm]<br />
I studied and programmed multiple resolution spine(1983),which was widely used as a better blending method because it can avoid slight ghost and blend seamlessly. The source code I integrated in my system is from Enblend. But when the images are to stitch have a running bus, how can we remove it? I know this problem is tough to handle, but it should be solved, because it's common that the images to be stitched have a moving object. I have some knowledge of it and am passionate with it.<br />
*[http://wiki.panotools.org/SoC2007_project_Panotools_Architecture Architectural Overhaul of Panotools]<br />
I have never stopped studying panotool. I read it from time to time. Once I wrote a rough structure about its functions’ interface following my tutor' demand. But I have not figure out its principle until now. While it seems that once we know its interfaces well ,we can reconstruct it.</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8663User:Girlliyanli2007-03-26T06:34:42Z<p>Girlliyanli: /* Programming experience about panoramic stitching */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panorama viewer system running on local machine after port the ptviewer's souce code from jave to c++. I also designed a scene-tour system based on panorama viewer. Later, I began to do study on image stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Programming experience about panorama viewer ==<br />
*The first panorama material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
*One panorama can only cover view from one site, the information is not enough. If we want to tour around a scene, what should to do? Yes, it needs us to do more. Following my tutor's advise, I built a scene-tour system. The scene tour system comprises two parts, one is to build the scene tour, being called scene-tour designer, another is to view the scene tour, being called scene-tour shower. It justly involved interface design without any intricate theory: At first, we should get some material, including the scene's map and several panoramas taken from the scene. Secondly, we input the map and build the relationship between the map's site and corresponding panorama with the scene-tour designer and adjust the directions. Last, we saved and get a file end with ".tour", whose structure we have defined. Now we can open this file on scene-tour shower and tour around the scene.<br />
<br />
==Programming experience about panoramic stitching==<br />
Spherical panoramas which can be viewed through ptviewer are usually stitched from images captured with fisheye cameras. It is inconvenient, expensive and complex. Can we stitch overlapped images taken with hand-hold cameras? Under the guide of this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M.Brown’s recognizing panorama]was recommended by my tutor. He said if I could build a system according to this article, we would make it. The other materials he recommended are [http://hugin.sourceforge.net/ Hugin], panotools ,[http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift], [http://enblend.sourceforge.net/ Enblend],So I began to do it step by step.<br />
*Firstly, we should extract control points(sift) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof.Lown's sift]carefully and knew clearly how it works. Then I searched on web and found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift],which is source code of sift in c#. It took me more than one month port it from c# to c++. Later on, the program was modified from time to time, being more efficient and more easily read. <br />
*Secondly, key points should been matched. M.Brown matched key points with Best Bin Fast. I download a source code about kd tree from[http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. As for the outliners' removing(unmatched pairs), we made modification on RANSAC to removed them and our experiment verified its efficiency.<br />
*Thirdly, we should calculate parameters, so that we can build relationship among those images with those parameter. M.Brown used bundle-adjustment to calculate parameter. It is hard for me to realize it and I did not find available source code. Detaining for a month, I changed my mind and started to build a comparatively easier system. It is limited and partial automatic. One of inputted image should be a central image, that is, other images are all overlapped with it. The system can recognize this central image and stitching them together. The projection we used is 8-parameters matrix which is calculate with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. <br />
*In fact, I have never given up building up a totally automatic system.Thanks to [http://hugin.sourceforge.net/ Hugin], I made it at last. After downloading hugin and download vigra and wxwindow. I compiled them successfully with VC.net. I debugged it step by step and gradually realize how it works. Then I began to extract useful code I needed and found the system monolithic. So I decided to build my system on it, using hugin as framework and integrating sift's code and blending'code from Enblend.<br />
I know hugin is a GUI based on panotools, which is a wonderful core code. I have never stop analyzing the principle of it.<br />
<br />
== Google Summer of Code 2007 ==<br />
After three years study of panorama, I know that panotools is the most wonderfully open source code and Hugin is the greatest GUI open source code. I appreciate their contribution. But neither of them is perfect. There are much work to do to complement them. Those subprojects showed on websites are all very important. I am sure once they are completed. It will be a milestone in the field of panorama and benefits all researchers. Because I analyzed panotools and hugin for a while and knew them well. I have eager to do something on it, so I apply for this project, hoping have a chance.<br />
The items I am interested:<br />
*[http://wiki.panotools.org/SoC2007_project_Feature_Descriptor Automatic feature detection for panoramic images]<br />
The key point I studied most is [http://www.cs.ubc.ca/~lowe/keypoints/ SIFT(2004)]. I am now studying [http://www.vision.ee.ethz.ch/~surf/index.html SURF(2006)]which is said to be faster and more efficient. We all know if the stitching process can be automatic or not mostly depends on if the control point can be extract properly and matched correctly. So it is a very important job. Because I programmed on sift and consulted material about other key points. It is still difficult to get a good descriptor if the cameras move largely.<br />
*[http://wiki.panotools.org/SoC2007_projects#Anti-ghosting_HDR_panorama_blending_and_merging_algorithm Anti-ghosting HDR panorama blending and merging algorithm]<br />
I studied and programmed multiple resolution spine(1983),which was widely used as a better blending method because it can avoid slight ghost and blend seamlessly. The source code I integrated in my system is from Enblend. But when the images are to stitch have a running bus, how can we remove it? I know this problem is tough to handle, but it should be solved, because it's common that the images to be stitched have a moving object. I have some knowledge of it and am passionate with it.<br />
*[http://wiki.panotools.org/SoC2007_project_Panotools_Architecture Architectural Overhaul of Panotools]<br />
I have never stopped studying panotool. I read it from time to time. Once I wrote a rough structure about its functions’ interface following my tutor' demand. But I have not figure out its principle until now. While it seems that once we know its interfaces well ,we can reconstruct it.</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8662User:Girlliyanli2007-03-26T06:29:48Z<p>Girlliyanli: /* Programming experience about panorama viewer */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panorama viewer system running on local machine after port the ptviewer's souce code from jave to c++. I also designed a scene-tour system based on panorama viewer. Later, I began to do study on image stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Programming experience about panorama viewer ==<br />
*The first panorama material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
*One panorama can only cover view from one site, the information is not enough. If we want to tour around a scene, what should to do? Yes, it needs us to do more. Following my tutor's advise, I built a scene-tour system. The scene tour system comprises two parts, one is to build the scene tour, being called scene-tour designer, another is to view the scene tour, being called scene-tour shower. It justly involved interface design without any intricate theory: At first, we should get some material, including the scene's map and several panoramas taken from the scene. Secondly, we input the map and build the relationship between the map's site and corresponding panorama with the scene-tour designer and adjust the directions. Last, we saved and get a file end with ".tour", whose structure we have defined. Now we can open this file on scene-tour shower and tour around the scene.<br />
<br />
==Programming experience about panoramic stitching==<br />
Spherical panoramas which can be viewed through ptviewer are usually stitched from images captured with fisheye cameras. It is inconvenient, expensive and complex. Can we stitch overlapped images taken with hand-hold cameras? Under the guide of this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M.Brown’s recognizing panorama]was recommended by my tutor. He said if I could build a system according to this article, we would make it. So I began to do it step by step.<br />
*First, control points (SIFT features) must be extracted from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof. Lowe's SIFT work] carefully until I clearly understood how it works. Then I searched the web and found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift], a C# implementation of SIFT. It took me more than a month to port it from C# to C++. Later on, I refined the program from time to time, making it more efficient and easier to read.<br />
*Second, the keypoints must be matched. M. Brown matched keypoints with Best-Bin-First search. I downloaded a kd-tree implementation from [http://www.cs.umd.edu/~mount/ANN/ ANN] and found that it worked well and fast. To remove outliers (mismatched pairs), we modified RANSAC, and our experiments verified the modification's efficiency.<br />
*Third, the camera parameters must be estimated so that the geometric relationships among the images can be established. M. Brown used bundle adjustment to compute them; it was hard for me to implement, and I could not find usable source code. After being stuck for a month, I changed my approach and built a comparatively simpler system. It is limited and only partially automatic: one of the input images must be a central image, i.e. every other image overlaps with it. The system recognizes this central image and stitches the others to it. The projection we used is an 8-parameter homography matrix, estimated with [http://www.ics.forth.gr/~lourakis/levmar/ Levenberg-Marquardt (levmar)].<br />
*In fact, I never gave up on building a fully automatic system, and thanks to [http://hugin.sourceforge.net/ Hugin] I finally succeeded. After downloading Hugin, VIGRA and wxWidgets, I compiled them successfully with VC.NET, debugged the code step by step, and gradually came to understand how it works. I then began extracting the code I needed, but found the system rather monolithic, so I decided to build my own system on top of it, using Hugin as the framework and integrating the SIFT code and the blending code from Enblend.<br />
I know Hugin is a GUI built on PanoTools, whose core code is excellent, and I have never stopped analyzing how it works.<br />
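The matching step described above can be sketched in a few lines of C++. This is a toy stand-in: a real system would query a kd-tree (as with ANN) instead of the brute-force scan here, and the names <code>Desc</code>/<code>matchOne</code> and the tiny descriptors are hypothetical, not ANN or autopano-sift API; only the ratio-test acceptance rule follows Lowe's paper.

```cpp
#include <vector>
#include <cstddef>

// Toy sketch of descriptor matching with Lowe's ratio test. A real system
// would query a kd-tree (e.g. ANN) instead of the brute-force scan here;
// the acceptance rule is the same. The 0.8 threshold follows Lowe's paper;
// the short descriptors used in the example are just for illustration.
using Desc = std::vector<double>;

static double dist2(const Desc& a, const Desc& b) { // squared distance
    double s = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double d = a[i] - b[i];
        s += d * d;
    }
    return s;
}

// Returns the index of the match in 'train', or -1 if the ratio test fails.
int matchOne(const Desc& query, const std::vector<Desc>& train,
             double ratio = 0.8) {
    int best = -1;
    double d1 = 1e300, d2 = 1e300; // best and second-best squared distances
    for (std::size_t i = 0; i < train.size(); ++i) {
        double d = dist2(query, train[i]);
        if (d < d1)      { d2 = d1; d1 = d; best = (int)i; }
        else if (d < d2) { d2 = d; }
    }
    // Accept only if the best match is clearly better than the runner-up.
    if (best >= 0 && d1 < ratio * ratio * d2) return best;
    return -1;
}
```

A query equally close to two training descriptors is rejected as ambiguous, which is exactly what makes this test useful before RANSAC.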
<br />
== Google Summer of Code 2007 ==<br />
After three years of studying panoramas, I believe PanoTools is wonderful open-source code and Hugin is a great open-source GUI, and I appreciate their contributors' work. But neither is perfect, and much work remains to complete them. The subprojects shown on the website are all very important; I am sure that once they are completed, they will be a milestone in the field of panoramas and will benefit all researchers. Because I have analyzed PanoTools and Hugin for a while and know them well, I am eager to contribute, so I am applying to this project and hoping for a chance.<br />
The items I am interested in:<br />
*[http://wiki.panotools.org/SoC2007_project_Feature_Descriptor Automatic feature detection for panoramic images]<br />
The keypoint detector I have studied most is [http://www.cs.ubc.ca/~lowe/keypoints/ SIFT (2004)]. I am now studying [http://www.vision.ee.ethz.ch/~surf/index.html SURF (2006)], which is said to be faster and more efficient. Whether the stitching process can be fully automatic depends mostly on whether control points can be extracted properly and matched correctly, so this is a very important job. Although I have implemented SIFT and read material on other keypoint detectors, it is still difficult to obtain a good descriptor when the camera moves a lot.<br />
*[http://wiki.panotools.org/SoC2007_projects#Anti-ghosting_HDR_panorama_blending_and_merging_algorithm Anti-ghosting HDR panorama blending and merging algorithm]<br />
I studied and implemented the multiresolution spline (1983), which is widely used as a blending method because it avoids slight ghosting and blends seamlessly. The blending code I integrated into my system is from Enblend. But when the images to be stitched contain a moving bus, how can we remove it? I know this problem is tough to handle, but it should be solved, because it is common for the images being stitched to contain a moving object. I have some knowledge of this area and am passionate about it.<br />
*[http://wiki.panotools.org/SoC2007_project_Panotools_Architecture Architectural Overhaul of Panotools]<br />
I have never stopped studying PanoTools and read its code from time to time. At my tutor's request, I once wrote a rough outline of its function interfaces, though I have not yet fully figured out its internal principles. Still, it seems that once we understand its interfaces well, we can reconstruct it.</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8661User:Girlliyanli2007-03-26T06:23:53Z<p>Girlliyanli: /* Programming Experience */</p>
<hr />
<div>My name is Yanli Li. I am a Chinese student doing research on panoramas.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of Kaifeng County, Henan province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science, [http://www.sdu.edu.cn/english05/ Shandong University], Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science, [http://www.sdu.edu.cn/english05/ Shandong University], Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality Lab (a state key lab), [http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics], Beijing, China<br />
<br />
== Programming Experience ==<br />
*My main programming language is C/C++. I started programming in it in 2001 and have never stopped since. Most of my projects involve image processing. My graduation project, "System of Text-Image Preprocessing", which won Shandong University's Excellent Undergraduate Design award, involved a great deal of image-processing knowledge, so I know the field well.<br />
*When I enrolled in the HCI&VR lab in 2004, I began to study panoramas. At first, I analyzed ptviewer's Java code and built a panorama viewer running on a local machine after porting the source code from Java to C++. I also designed a scene-tour system based on the panorama viewer. Later, I began studying image stitching. After studying the source code of Hugin, SIFT, PanoTools and Enblend, I built a panoramic stitching system in VC.NET. It is a fully automatic system: the framework is Hugin, the control points are SIFT features, the matching is based on a kd-tree, the blending method is the multiresolution spline, and RANSAC was modified so that outliers can be removed more quickly.<br />
<br />
== Programming experience about panorama viewer ==<br />
*The first panorama material I consulted was [http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial]. The first panorama system I programmed was a panoramic viewer. As we all know, [http://webuser.hs-furtwangen.de/~dersch/ ptviewer] is used to view panoramas, but the only source code I could find at the time was Java, from a Java applet. So if we wanted to view panoramas on a local machine, we had to port ptviewer from Java to C++. It took me about two months to analyze the interfaces of the source code, write documents about its structure, build a framework, design several classes, write the detailed code and remove small bugs. Although my tutor advised me to use Direct3D to speed it up, I found it worked well without Direct3D, so all the source code is plain C++. The principle of ptviewer, as I understand it, is as follows: the input is a spherical panorama, and the output is a frame on the screen, with a virtual sphere between them. The frame is determined by three viewing parameters of the virtual sphere: the pan angle, the tilt angle and the horizontal field of view (hfov), which are controlled in VC.NET by mouse events, keyboard events and menus or toolbars. First the panorama is projected onto the virtual sphere, then the frame is projected onto the virtual sphere according to the three parameters; this establishes the relationship between each frame pixel and a panorama pixel. With this backward projection, we can fetch the corresponding panorama pixel for every pixel of the frame.<br />
*One panorama covers the view from only one site, which is not enough information if we want to tour around a scene. Following my tutor's advice, I built a scene-tour system comprising two parts: one to author the tour, called the scene-tour designer, and one to view it, called the scene-tour shower. It mainly involved interface design without any intricate theory. First, we gather the material: the scene's map and several panoramas taken in the scene. Second, with the scene-tour designer we load the map, link each site on the map to its corresponding panorama, and adjust the directions. Last, we save the result as a ".tour" file, a structure we defined. The file can then be opened in the scene-tour shower to tour around the scene. To build a demo, I downloaded material from a famous Chinese panorama website, [http://www.jietu.com Jietu].<br />
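The backward projection described above can be sketched in C++: map a screen pixel to normalized equirectangular panorama coordinates given the three viewing parameters. The function name, the y-up convention and the rotation order are my own illustrative assumptions, not ptviewer's actual interface.

```cpp
#include <cmath>

// Hypothetical sketch of a panorama viewer's backward projection: for a
// screen pixel, find the matching point in an equirectangular panorama.
const double PI = 3.14159265358979323846;

struct PanoPixel { double u, v; }; // panorama coordinates, each in [0,1]

PanoPixel backwardProject(double px, double py,    // screen pixel
                          int width, int height,   // screen size
                          double pan, double tilt, // view direction (radians)
                          double hfov)             // horizontal FOV (radians)
{
    // Place the screen plane at distance d so it subtends hfov horizontally.
    double d = (width / 2.0) / std::tan(hfov / 2.0);
    double x = px - width / 2.0;
    double y = -(py - height / 2.0); // screen y grows downward
    // Rotate the view ray by tilt (about the x-axis), then pan (about y).
    double y1 = y * std::cos(tilt) - d * std::sin(tilt);
    double z1 = y * std::sin(tilt) + d * std::cos(tilt);
    double x2 = x * std::cos(pan) + z1 * std::sin(pan);
    double z2 = -x * std::sin(pan) + z1 * std::cos(pan);
    // Ray direction -> spherical angles -> equirectangular coordinates.
    double lon = std::atan2(x2, z2);                           // [-pi, pi]
    double lat = std::atan2(y1, std::sqrt(x2 * x2 + z2 * z2)); // [-pi/2, pi/2]
    return { (lon + PI) / (2.0 * PI), (lat + PI / 2.0) / PI };
}
```

With pan = tilt = 0, the center of the screen maps to the center of the panorama; panning by 90 degrees shifts the sampled longitude by a quarter of the panorama's width.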
<br />
==Programming experience about panoramic stitching==<br />
Spherical panoramas that can be viewed in ptviewer are usually stitched from images captured with fisheye lenses, which is inconvenient, expensive and complex. Can we instead stitch overlapping images taken with a hand-held camera? Guided by this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M. Brown’s "Recognising Panoramas"] was recommended by my tutor, who said that if I could build a system following this article, we would succeed. So I began, step by step.<br />
*First, control points (SIFT features) must be extracted from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof. Lowe's SIFT work] carefully until I clearly understood how it works. Then I searched the web and found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift], a C# implementation of SIFT. It took me more than a month to port it from C# to C++. Later on, I refined the program from time to time, making it more efficient and easier to read.<br />
*Second, the keypoints must be matched. M. Brown matched keypoints with Best-Bin-First search. I downloaded a kd-tree implementation from [http://www.cs.umd.edu/~mount/ANN/ ANN] and found that it worked well and fast. To remove outliers (mismatched pairs), we modified RANSAC, and our experiments verified the modification's efficiency.<br />
*Third, the camera parameters must be estimated so that the geometric relationships among the images can be established. M. Brown used bundle adjustment to compute them; it was hard for me to implement, and I could not find usable source code. After being stuck for a month, I changed my approach and built a comparatively simpler system. It is limited and only partially automatic: one of the input images must be a central image, i.e. every other image overlaps with it. The system recognizes this central image and stitches the others to it. The projection we used is an 8-parameter homography matrix, estimated with [http://www.ics.forth.gr/~lourakis/levmar/ Levenberg-Marquardt (levmar)].<br />
*In fact, I never gave up on building a fully automatic system, and thanks to [http://hugin.sourceforge.net/ Hugin] I finally succeeded. After downloading Hugin, VIGRA and wxWidgets, I compiled them successfully with VC.NET, debugged the code step by step, and gradually came to understand how it works. I then began extracting the code I needed, but found the system rather monolithic, so I decided to build my own system on top of it, using Hugin as the framework and integrating the SIFT code and the blending code from Enblend.<br />
I know Hugin is a GUI built on PanoTools, whose core code is excellent, and I have never stopped analyzing how it works.<br />
<br />
== Google Summer of Code 2007 ==<br />
After three years of studying panoramas, I believe PanoTools is wonderful open-source code and Hugin is a great open-source GUI, and I appreciate their contributors' work. But neither is perfect, and much work remains to complete them. The subprojects shown on the website are all very important; I am sure that once they are completed, they will be a milestone in the field of panoramas and will benefit all researchers. Because I have analyzed PanoTools and Hugin for a while and know them well, I am eager to contribute, so I am applying to this project and hoping for a chance.<br />
The items I am interested in:<br />
*[http://wiki.panotools.org/SoC2007_project_Feature_Descriptor Automatic feature detection for panoramic images]<br />
The keypoint detector I have studied most is [http://www.cs.ubc.ca/~lowe/keypoints/ SIFT (2004)]. I am now studying [http://www.vision.ee.ethz.ch/~surf/index.html SURF (2006)], which is said to be faster and more efficient. Whether the stitching process can be fully automatic depends mostly on whether control points can be extracted properly and matched correctly, so this is a very important job. Although I have implemented SIFT and read material on other keypoint detectors, it is still difficult to obtain a good descriptor when the camera moves a lot.<br />
*[http://wiki.panotools.org/SoC2007_projects#Anti-ghosting_HDR_panorama_blending_and_merging_algorithm Anti-ghosting HDR panorama blending and merging algorithm]<br />
I studied and implemented the multiresolution spline (1983), which is widely used as a blending method because it avoids slight ghosting and blends seamlessly. The blending code I integrated into my system is from Enblend. But when the images to be stitched contain a moving bus, how can we remove it? I know this problem is tough to handle, but it should be solved, because it is common for the images being stitched to contain a moving object. I have some knowledge of this area and am passionate about it.<br />
*[http://wiki.panotools.org/SoC2007_project_Panotools_Architecture Architectural Overhaul of Panotools]<br />
I have never stopped studying PanoTools and read its code from time to time. At my tutor's request, I once wrote a rough outline of its function interfaces, though I have not yet fully figured out its internal principles. Still, it seems that once we understand its interfaces well, we can reconstruct it.</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8658User:Girlliyanli2007-03-26T01:53:20Z<p>Girlliyanli: /* Programming experience about panoramic viewer */</p>
<hr />
<div>My name is Yanli Li. I am a Chinese student doing research on panoramas.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of Kaifeng County, Henan Province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science, [http://www.sdu.edu.cn/english05/ Shandong University], Jinan, Shandong Province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science, [http://www.sdu.edu.cn/english05/ Shandong University], Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality Lab (a state key lab), [http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics], Beijing, China<br />
<br />
== Programming Experience ==<br />
*My main programming language is C/C++. I started programming in it in 2001 and have never stopped since. Most of my projects concern image processing: my undergraduate thesis project, a System of Text-Image Preprocessing, was awarded Shandong University's Excellent Undergraduate Design and involved a great deal of image-processing knowledge, so I know the field well.<br />
*When I joined the HCI&VR lab in 2004, I began studying panoramas. First I analyzed ptviewer's Java code and, after porting the source from Java to C, built a panoramic viewing system that runs on a local machine. I also designed a scene-tour system on top of the viewer. Later I turned to panoramic stitching. After studying the source code of Hugin, SIFT, Panotools and Enblend, I built a fully automatic panoramic stitching system on VC.NET: Hugin provides the framework, SIFT supplies the control points, matching is based on a kd-tree, blending uses the multiresolution spline, and RANSAC was modified so that outliers can be removed more quickly.<br />
<br />
== Programming experience about panoramic viewer ==<br />
*The first panoramic material I consulted was [http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben's panorama tutorial], and the first panorama system I programmed was a viewer. [http://webuser.hs-furtwangen.de/~dersch/ ptviewer] was the standard tool for viewing panoramas, but the only source code I could find at the time was the Java applet, so viewing panoramas on a local machine meant porting ptviewer from Java to C++. It took me about two months to analyze the interfaces of the source code, document its structure, build a framework, design the classes, write the detailed code and remove small bugs. My tutor advised me to use Direct3D to speed it up, but I found it worked well without Direct3D, so all the source code is plain C++. The principle of ptviewer, as I understand it, is this: the input is a spherical panorama and the desired output is a frame on the screen, with a virtual sphere between them. The frame is determined by three parameters of the virtual sphere: the pan angle, the tilt angle and the hfov angle, which are controlled in VC.NET by mouse events, keyboard events and menus or toolbars. The panorama is first projected onto the virtual sphere by two formulas; the frame is then projected onto the sphere by two more formulas governed by the three parameters, which establishes the relationship between the frame and the panorama. With this backward projection we can fetch the corresponding panorama pixel for each frame pixel.<br />
*A single panorama only covers the view from one spot, which is not enough information to tour a whole scene. Following my tutor's advice, I built a scene-tour system with two parts: the scene-tour designer, which builds a tour, and the scene-tour shower, which plays it back. It mostly involved interface design rather than intricate theory. First, we gather material: the scene's map and several panoramas taken in the scene. Second, we load the map into the designer, link each site on the map to its panorama and adjust the directions. Last, we save the result as a ".tour" file, a structure we defined, which can then be opened in the shower to walk around the scene. To build a demo, I downloaded material from a well-known Chinese panorama website, [http://www.jietu.com Jietu].<br />
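The backward projection described above can be sketched as follows. This is an illustrative reconstruction, not ptviewer's actual code: it assumes an equirectangular panorama stored as a flat row-major list, uses nearest-neighbour sampling where a real viewer would interpolate, and takes pan, tilt and hfov in radians.

```python
import math

def render_view(pano, pano_w, pano_h, out_w, out_h, pan, tilt, hfov):
    """Backward projection for a spherical (equirectangular) panorama:
    each output pixel becomes a ray on the viewing sphere (controlled by
    pan, tilt and hfov), the ray's longitude/latitude pick the source
    pixel, and the frame is filled by nearest-neighbour sampling."""
    f = (out_w / 2) / math.tan(hfov / 2)          # focal length in pixels
    ct, st = math.cos(tilt), math.sin(tilt)
    frame = []
    for y in range(out_h):
        for x in range(out_w):
            # Ray through this pixel in camera coordinates.
            vx, vy, vz = x - out_w / 2, y - out_h / 2, f
            # Tilt about the x axis; pan is a shift in longitude.
            vy, vz = vy * ct - vz * st, vy * st + vz * ct
            lon = math.atan2(vx, vz) + pan
            lat = math.atan2(vy, math.hypot(vx, vz))
            # Longitude/latitude to panorama pixel coordinates.
            px = int((lon / (2 * math.pi) + 0.5) * pano_w) % pano_w
            py = min(pano_h - 1, max(0, int((lat / math.pi + 0.5) * pano_h)))
            frame.append(pano[py * pano_w + px])
    return frame
```

Because every frame pixel is mapped back to a source pixel (rather than pushing source pixels forward), the frame has no holes, which is why the backward direction is the natural one for a viewer.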
<br />
== Programming experience about panoramic stitching ==<br />
Spherical panoramas that can be viewed in ptviewer are usually stitched from images captured with fisheye lenses, which is inconvenient, expensive and complex. Can we instead stitch overlapping images taken with a hand-held camera? Guided by this goal, I read many articles. My tutor recommended [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M. Brown's "Recognising Panoramas"] and said that if I could build a system following this paper, we would succeed. So I set out step by step.<br />
*First, control points (SIFT keypoints) must be extracted from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/ Prof. Lowe's SIFT paper] carefully until I clearly understood how it works, then searched the web and found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift], a C# implementation of SIFT. Porting it from C# to C++ took me more than a month, and I have since revised the program repeatedly to make it faster and easier to read.<br />
*Second, the keypoints must be matched. M. Brown matched keypoints with Best-Bin-First search; I downloaded a kd-tree implementation from [http://www.cs.umd.edu/~mount/ANN/ ANN] and found it fast and reliable. To remove outliers (mismatched pairs), we modified RANSAC, and our experiments confirmed its efficiency.<br />
*Third, we must estimate the parameters that relate the images to one another. M. Brown used bundle adjustment, which I found hard to implement and for which I could not find usable source code. After being stuck for a month, I changed my plan and built a simpler, partially automatic system. It is limited: one input image must be a central image that overlaps all the others; the system recognizes this central image and stitches the rest to it. The projection we used is an 8-parameter matrix estimated with [http://www.ics.forth.gr/~lourakis/levmar/ Levenberg-Marquardt (levmar)].<br />
*Even so, I never gave up on building a fully automatic system, and thanks to [http://hugin.sourceforge.net/ Hugin] I eventually succeeded. After downloading Hugin together with VIGRA and wxWidgets, I compiled them successfully with VC.NET, debugged the code step by step and gradually understood how it works. I then began extracting the code I needed, but found the system monolithic, so I decided to build my system on top of it, using Hugin as the framework and integrating the SIFT code and the blending code from Enblend.<br />
I know Hugin is a GUI built on Panotools, a wonderful core library, and I have never stopped analyzing how it works.<br />
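The outlier-removal stage of this pipeline can be sketched as below. This is a simplified illustration, not the modified RANSAC from my system: it fits a pure translation instead of the 8-parameter matrix, so only the sample-score-keep consensus loop corresponds to the real pipeline.

```python
import random

def ransac_translation(pairs, iters=200, tol=2.0, seed=0):
    """RANSAC over control-point pairs ((x1, y1), (x2, y2)): repeatedly
    fit a model to a minimal random sample (one pair suffices for a
    translation), count how many pairs agree within `tol` pixels, and
    keep the model with the largest consensus set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(pairs)
        dx, dy = x2 - x1, y2 - y1           # model from one sample
        inliers = [p for p in pairs
                   if abs(p[1][0] - p[0][0] - dx) <= tol
                   and abs(p[1][1] - p[0][1] - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers
```

A mismatched pair can never gather a large consensus set, so it is excluded automatically; a homography version replaces the one-pair sample with four pairs and the residual with a reprojection error.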
<br />
</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8655User:Girlliyanli2007-03-26T01:14:32Z<p>Girlliyanli: /* Panoramic viewer */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panoramic view system running on local machine after port the souce code from jave to c. I also designed a scene-tour system based on panoramic viewer. Later, I began to do study on panoramic stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net to stitch images. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Panoramic viewer ==<br />
*The first panoramic material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
*One panorama only covers the view from a single site, which is not enough information to tour a whole scene. Following my tutor's advice, I built a scene-tour system with two parts: the scene-tour designer, which builds the tour, and the scene-tour shower, which plays it back. It mainly involved interface design, without any intricate theory. First, we gather material: the scene's map and several panoramas taken at sites in the scene. Second, we load the map into the scene-tour designer, link each map site to its corresponding panorama and adjust the directions. Last, we save the result as a ".tour" file, a structure we defined ourselves. This file can then be opened in the scene-tour shower to walk around the scene. To build a demo, I downloaded material from a well-known Chinese panoramic website, [http://www.jietu.com Jietu].<br />
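The backward projection described above can be sketched in C++ roughly as follows. This is only an illustrative pinhole-camera sketch of the idea; the names (`backwardProject`, `panoW`, `panoH`) are my own, and ptviewer's actual code differs in detail.

```cpp
#include <cmath>

const double PI = std::acos(-1.0);

struct PanoCoord { double u, v; };

// Map a frame pixel (x, y) to equirectangular panorama coordinates (u, v)
// for a view with the given pan and tilt angles (radians) and horizontal
// field of view. Illustrative sketch, not ptviewer's real implementation.
PanoCoord backwardProject(int x, int y, int frameW, int frameH,
                          double pan, double tilt, double hfov,
                          int panoW, int panoH) {
    // Focal length in pixels derived from the horizontal FOV.
    double f = (frameW / 2.0) / std::tan(hfov / 2.0);
    // Ray through the pixel in camera space (z points forward).
    double cx = x - frameW / 2.0;
    double cy = y - frameH / 2.0;
    double cz = f;
    // Rotate the ray by tilt (about x axis), then pan (about y axis).
    double y1 = cy * std::cos(tilt) - cz * std::sin(tilt);
    double z1 = cy * std::sin(tilt) + cz * std::cos(tilt);
    double x2 = cx * std::cos(pan) + z1 * std::sin(pan);
    double z2 = -cx * std::sin(pan) + z1 * std::cos(pan);
    // Direction -> spherical angles -> equirectangular pixel.
    double lon = std::atan2(x2, z2);                  // [-pi, pi]
    double lat = std::atan2(y1, std::hypot(x2, z2));  // [-pi/2, pi/2]
    PanoCoord p;
    p.u = (lon + PI) / (2.0 * PI) * panoW;
    p.v = (lat + PI / 2.0) / PI * panoH;
    return p;
}
```

With pan = tilt = 0, the frame's center pixel maps to the center of the panorama, which is a quick sanity check for the formulas.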
<br />
== Panoramic stitching ==<br />
Spherical panoramas viewed through ptviewer are usually stitched from images captured with fisheye cameras, which is inconvenient, expensive and complex. Can we instead stitch overlapping images taken with a hand-held camera? With this goal in mind, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M. Brown’s recognising panoramas] was recommended by my tutor, who said that if I could build a system according to this article, we would make it. So I began, step by step.<br />
*First, we extract control points (SIFT features) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof. Lowe's SIFT paper] carefully and understood clearly how it works. Then I searched the web and found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift], an implementation of SIFT in C#. It took me more than one month to port it from C# to C++. Later on, the program was revised from time to time, becoming more efficient and easier to read. <br />
*Second, the key points must be matched. M. Brown matched key points with Best-Bin-First search. I downloaded kd-tree source code from [http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. To remove outliers (mismatched pairs), we modified RANSAC, and our experiments verified its efficiency.<br />
*Third, we calculate the parameters that relate the images to one another. M. Brown used bundle adjustment, but it was hard for me to implement and I found no available source code. After being stuck for a month, I changed my approach and built a comparatively simpler system. It is limited and only partially automatic: one of the input images must be a central image, i.e. every other image overlaps with it. The system recognizes this central image and stitches the others to it. The projection we used is an 8-parameter matrix, computed with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. <br />
*In fact, I never gave up on building a fully automatic system, and thanks to [http://hugin.sourceforge.net/ Hugin] I made it in the end. After downloading Hugin, VIGRA and wxWidgets, I compiled them successfully with VC.NET, debugged the code step by step and gradually understood how it works. I began by extracting the code I needed, but found the system monolithic, so I decided to build my system on top of it, using Hugin as the framework and integrating the SIFT code and the blending code from Enblend.<br />
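The RANSAC step mentioned above can be illustrated with a deliberately simplified sketch that estimates a pure 2-D translation rather than the 8-parameter matrix. This is plain textbook RANSAC, not the modified version we made, and all names are illustrative.

```cpp
#include <vector>
#include <cstdlib>
#include <cmath>

struct Pt { double x, y; };
struct Match { Pt a, b; };

// Plain RANSAC over a 2-D translation model (one match per sample):
// repeatedly hypothesize a translation from a random match, count how
// many matches agree within `tol` pixels, and keep the largest inlier set.
std::vector<int> ransacTranslation(const std::vector<Match>& m,
                                   int iters, double tol) {
    std::vector<int> best;
    for (int i = 0; i < iters; ++i) {
        const Match& s = m[std::rand() % m.size()];
        double dx = s.b.x - s.a.x, dy = s.b.y - s.a.y;
        std::vector<int> inliers;
        for (int j = 0; j < (int)m.size(); ++j) {
            double ex = m[j].a.x + dx - m[j].b.x;
            double ey = m[j].a.y + dy - m[j].b.y;
            if (std::hypot(ex, ey) < tol) inliers.push_back(j);
        }
        if (inliers.size() > best.size()) best = inliers;
    }
    return best;
}
```

The returned indices are the matches consistent with the best model; the remaining ones are the outliers to be discarded before parameter estimation.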
I know Hugin is a GUI built on panotools, which is a wonderful core library, and I have never stopped analyzing how it works.<br />
<br />
== Google Summer of Code 2007 ==<br />
After three years of studying panoramas, I believe panotools is the most wonderful open-source core and Hugin the greatest open-source GUI, and I appreciate their contributions. But neither is perfect; there is much work to do to complete them. The subprojects shown on the website are all very important, and I am sure that once they are completed it will be a milestone in the field of panoramas and will benefit all researchers. Because I have analyzed panotools and Hugin for a while and know them well, I am eager to contribute, so I am applying to this project, hoping to have a chance.<br />
The items I am interested in:<br />
*[http://wiki.panotools.org/SoC2007_project_Feature_Descriptor Automatic feature detection for panoramic images]<br />
The keypoint detector I have studied most is SIFT (2004). I am now studying SURF (2006), which is said to be faster and more efficient. Whether the stitching process can be automatic depends mostly on whether control points can be extracted properly and matched correctly, so this is a very important job. Although I have programmed SIFT and consulted material on other keypoint detectors, it is still difficult to get a good descriptor when the camera moves a lot.<br />
*[http://wiki.panotools.org/SoC2007_projects#Anti-ghosting_HDR_panorama_blending_and_merging_algorithm Anti-ghosting HDR panorama blending and merging algorithm]<br />
I have studied and programmed the multiresolution spline (1983), which is widely used as a blending method because it avoids slight ghosting and blends seamlessly. The code I integrated into my system comes from Enblend. But when the images to be stitched contain a moving bus, how can we remove it? I know this problem is tough, but it must be solved, because moving objects are common in images to be stitched. I have some knowledge of it and am passionate about it.<br />
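The idea of the multiresolution spline can be illustrated with a minimal 1-D, two-level sketch: the coarse band is blended over a wide transition and the fine band over a narrow one, instead of applying one hard seam to all frequencies. This is my own simplified illustration, not Enblend's code, and all names are my own.

```cpp
#include <vector>
#include <cstddef>
#include <cmath>

typedef std::vector<double> Sig;

// Simple 1-2-1 low-pass filter with replicated borders.
static Sig smooth(const Sig& s) {
    Sig r(s.size());
    for (size_t i = 0; i < s.size(); ++i) {
        double l = s[i > 0 ? i - 1 : 0];
        double rt = s[i + 1 < s.size() ? i + 1 : s.size() - 1];
        r[i] = 0.25 * l + 0.5 * s[i] + 0.25 * rt;
    }
    return r;
}

// Two-level multiresolution-spline blend of signals a and b under a
// 0/1 mask: the low band uses a smoothed (wide) mask, the high band
// (the Laplacian residual) uses the original (narrow) mask.
Sig multibandBlend(const Sig& a, const Sig& b, const Sig& mask) {
    Sig lowA = smooth(a), lowB = smooth(b);
    Sig softMask = smooth(smooth(mask));  // wide transition for the coarse band
    Sig out(a.size());
    for (size_t i = 0; i < a.size(); ++i) {
        double highA = a[i] - lowA[i];    // fine detail of each input
        double highB = b[i] - lowB[i];
        double low  = softMask[i] * lowA[i] + (1.0 - softMask[i]) * lowB[i];
        double high = mask[i] * highA + (1.0 - mask[i]) * highB;
        out[i] = low + high;
    }
    return out;
}
```

A full implementation repeats this over many pyramid levels with downsampling, but the per-band blending above is the heart of the method.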
*[http://wiki.panotools.org/SoC2007_project_Panotools_Architecture Architectural Overhaul of Panotools]I have never stopped studying panotool. I read it from time to time. Once I wrote a rough structure about its functions’ interface following my tutor' demand. But I have not figure out its principle until now. While it seems that once we know its interfaces well ,we can reconstruct it.</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8596User:Girlliyanli2007-03-24T13:26:22Z<p>Girlliyanli: /* Google Summer of Code 2007 */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panoramic view system running on local machine after port the souce code from jave to c. I also designed a scene-tour system based on panoramic viewer. Later, I began to do study on panoramic stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net to stitch images. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Panoramic viewer ==<br />
*The first panoramic material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
*One panorama can only cover view from one site, the information is not enough. If we want to tour around a scene, what should to do? Yes, it needs us to do more. Following my tutor advice, I built a scene-tour system. The scene tour system comprises two parts, one is to build the scene tour, being called scene-tour designer, another is to view the scene tour, being called scene-tour shower. It justly involved interface design without any intricate theory: At first, we should get some material, including the scene's map and several panoramas taken from the scene. Secondly, we input the map and build the relationship between the map's site and corresponding panorama with the scene-tour designer and adjust the directions. Last, we saved and get a file end with ".tour", which is structure we defined. Now we can view this file on scene-tour shower and tour around the scene. To build a demo, I downloaded material from a famous Chinese panoramic website[http://www.jietu.com Jietu]<br />
<br />
==Panoramic stitching==<br />
Spherical panoramas which can be viewed through ptviewer are usually stitched from images captured with fisheye cameras. It is inconvenient, expensive and complex. Can we stitch overlapped images taken with hand-hold cameras? Under the guide of this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M.Brown’s recognizing panorama]was recommended by my tutor. He said if I could build a system according to this article, we would make it. So I began to do it step by step.<br />
*Firstly, we should extract control points(sift) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof.Lown's sift]carefully and knew clearly how it works. Then I searched on web and found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift],which is source code of sift in c#. It took me more than one month port it from c# to c++. Later on, the program was modified from time to time, being more efficient and more easily read. <br />
*Secondly, key points should been matched. M.Brown matched key points with Best Bin Fast. I download a source code about kd tree from[http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. As for the outliners' removing(unmatched pairs), we made modification on RANSAC to removed them and our experiment verified its efficiency.<br />
*Thirdly, we should calculate parameters, so that we can build relationship among those images with those parameter. M.Brown used bundle-adjustment to calculate parameter. It is hard for me to realize it and I did not find available source code. Detaining for a month, I changed my mind and started to build a comparatively easier system. It is limited and partial automatic. One of inputted image should be a central image, that is, other images are all overlapped with it. The system can recognize this central image and stitching them together. The projection we used is 8-parameters matrix which is calculate with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. <br />
*In fact, I have never given up building up a totally automatic system.Thanks to [http://hugin.sourceforge.net/ Hugin], I made it at last. After downloading hugin and download vigra and wxwindow. I compiled them successfully with VC.net. I debugged it step by step and gradually realize how it works. Then I began to extract useful code I needed and found the system monolithic. So I decided to build my system on it, using hugin as framework and integrating sift's code and blending'code from Enblend.<br />
I know hugin is a GUI based on panotools, which is a wonderful core code. I have never stop analyzing the principle of it.<br />
<br />
== Google Summer of Code 2007 ==<br />
After three years study of panorama, I know that panotools is the most wonderfully open source code and Hugin is the greatest GUI open source code. I appreciate their contribution. But neither of them is perfect. There are much work to do to complement them. Those subprojects showed on websites are all very important. I am sure once they are completed. It will be a milestone in the field of panorama and benefits all researchers. Because I analyzed panotools and hugin for a while and knew them well. I have eager to do something on it, so I apply for this project, hoping have a chance.<br />
The items I am interested:<br />
*[http://wiki.panotools.org/SoC2007_project_Feature_Descriptor Automatic feature detection for panoramic images]<br />
*[http://wiki.panotools.org/SoC2007_projects#Anti-ghosting_HDR_panorama_blending_and_merging_algorithm Anti-ghosting HDR panorama blending and merging algorithm]<br />
*[http://wiki.panotools.org/SoC2007_project_Panotools_Architecture Architectural Overhaul of Panotools]</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8585User:Girlliyanli2007-03-24T09:37:19Z<p>Girlliyanli: /* Programming experience of panoramic viewer */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panoramic view system running on local machine after port the souce code from jave to c. I also designed a scene-tour system based on panoramic viewer. Later, I began to do study on panoramic stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net to stitch images. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Panoramic viewer ==<br />
*The first panoramic material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
*One panorama can only cover view from one site, the information is not enough. If we want to tour around a scene, what should to do? Yes, it needs us to do more. Following my tutor advice, I built a scene-tour system. The scene tour system comprises two parts, one is to build the scene tour, being called scene-tour designer, another is to view the scene tour, being called scene-tour shower. It justly involved interface design without any intricate theory: At first, we should get some material, including the scene's map and several panoramas taken from the scene. Secondly, we input the map and build the relationship between the map's site and corresponding panorama with the scene-tour designer and adjust the directions. Last, we saved and get a file end with ".tour", which is structure we defined. Now we can view this file on scene-tour shower and tour around the scene. To build a demo, I downloaded material from a famous Chinese panoramic website[http://www.jietu.com Jietu]<br />
<br />
==Panoramic stitching==<br />
Spherical panoramas which can be viewed through ptviewer are usually stitched from images captured with fisheye cameras. It is inconvenient, expensive and complex. Can we stitch overlapped images taken with hand-hold cameras? Under the guide of this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M.Brown’s recognizing panorama]was recommended by my tutor. He said if I could build a system according to this article, we would make it. So I began to do it step by step.<br />
*Firstly, we should extract control points(sift) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof.Lown's sift]carefully and knew clearly how it works. Then I searched on web and found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift],which is source code of sift in c#. It took me more than one month port it from c# to c++. Later on, the program was modified from time to time, being more efficient and more easily read. <br />
*Secondly, key points should been matched. M.Brown matched key points with Best Bin Fast. I download a source code about kd tree from[http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. As for the outliners' removing(unmatched pairs), we made modification on RANSAC to removed them and our experiment verified its efficiency.<br />
*Thirdly, we should calculate parameters, so that we can build relationship among those images with those parameter. M.Brown used bundle-adjustment to calculate parameter. It is hard for me to realize it and I did not find available source code. Detaining for a month, I changed my mind and started to build a comparatively easier system. It is limited and partial automatic. One of inputted image should be a central image, that is, other images are all overlapped with it. The system can recognize this central image and stitching them together. The projection we used is 8-parameters matrix which is calculate with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. <br />
*In fact, I have never given up building up a totally automatic system.Thanks to [http://hugin.sourceforge.net/ Hugin], I made it at last. After downloading hugin and download vigra and wxwindow. I compiled them successfully with VC.net. I debugged it step by step and gradually realize how it works. Then I began to extract useful code I needed and found the system monolithic. So I decided to build my system on it, using hugin as framework and integrating sift's code and blending'code from Enblend.<br />
I know hugin is a GUI based on panotools, which is a wonderful core code. I have never stop analyzing the principle of it.<br />
<br />
== Google Summer of Code 2007 ==<br />
I am applying for:<br />
*[http://wiki.panotools.org/SoC2007_project_Feature_Descriptor Automatic feature detection for panoramic images]<br />
*[http://wiki.panotools.org/SoC2007_projects#Anti-ghosting_HDR_panorama_blending_and_merging_algorithm Anti-ghosting HDR panorama blending and merging algorithm]<br />
*[http://wiki.panotools.org/SoC2007_project_Panotools_Architecture Architectural Overhaul of Panotools]</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8584User:Girlliyanli2007-03-24T09:36:43Z<p>Girlliyanli: /* Panoramic viewer */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panoramic view system running on local machine after port the souce code from jave to c. I also designed a scene-tour system based on panoramic viewer. Later, I began to do study on panoramic stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net to stitch images. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Programming experience of panoramic viewer ==<br />
*The first panoramic material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
*One panorama can only cover view from one site, the information is not enough. If we want to tour around a scene, what should to do? Yes, it needs us to do more. Following my tutor advice, I built a scene-tour system. The scene tour system comprises two parts, one is to build the scene tour, being called scene-tour designer, another is to view the scene tour, being called scene-tour shower. It justly involved interface design without any intricate theory: At first, we should get some material, including the scene's map and several panoramas taken from the scene. Secondly, we input the map and build the relationship between the map's site and corresponding panorama with the scene-tour designer and adjust the directions. Last, we saved and get a file end with ".tour", which is structure we defined. Now we can view this file on scene-tour shower and tour around the scene. To build a demo, I downloaded material from a famous Chinese panoramic website[http://www.jietu.com Jietu]<br />
<br />
==Panoramic stitching==<br />
Spherical panoramas which can be viewed through ptviewer are usually stitched from images captured with fisheye cameras. It is inconvenient, expensive and complex. Can we stitch overlapped images taken with hand-hold cameras? Under the guide of this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M.Brown’s recognizing panorama]was recommended by my tutor. He said if I could build a system according to this article, we would make it. So I began to do it step by step.<br />
*Firstly, we should extract control points(sift) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof.Lown's sift]carefully and knew clearly how it works. Then I searched on web and found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift],which is source code of sift in c#. It took me more than one month port it from c# to c++. Later on, the program was modified from time to time, being more efficient and more easily read. <br />
*Secondly, key points should been matched. M.Brown matched key points with Best Bin Fast. I download a source code about kd tree from[http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. As for the outliners' removing(unmatched pairs), we made modification on RANSAC to removed them and our experiment verified its efficiency.<br />
*Thirdly, we should calculate parameters, so that we can build relationship among those images with those parameter. M.Brown used bundle-adjustment to calculate parameter. It is hard for me to realize it and I did not find available source code. Detaining for a month, I changed my mind and started to build a comparatively easier system. It is limited and partial automatic. One of inputted image should be a central image, that is, other images are all overlapped with it. The system can recognize this central image and stitching them together. The projection we used is 8-parameters matrix which is calculate with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. <br />
*In fact, I have never given up building up a totally automatic system.Thanks to [http://hugin.sourceforge.net/ Hugin], I made it at last. After downloading hugin and download vigra and wxwindow. I compiled them successfully with VC.net. I debugged it step by step and gradually realize how it works. Then I began to extract useful code I needed and found the system monolithic. So I decided to build my system on it, using hugin as framework and integrating sift's code and blending'code from Enblend.<br />
I know hugin is a GUI based on panotools, which is a wonderful core code. I have never stop analyzing the principle of it.<br />
<br />
== Google Summer of Code 2007 ==<br />
I am applying for:<br />
*[http://wiki.panotools.org/SoC2007_project_Feature_Descriptor Automatic feature detection for panoramic images]<br />
*[http://wiki.panotools.org/SoC2007_projects#Anti-ghosting_HDR_panorama_blending_and_merging_algorithm Anti-ghosting HDR panorama blending and merging algorithm]<br />
*[http://wiki.panotools.org/SoC2007_project_Panotools_Architecture Architectural Overhaul of Panotools]</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8583User:Girlliyanli2007-03-24T08:40:34Z<p>Girlliyanli: /* Google Summer of Code 2007 */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panoramic view system running on local machine after port the souce code from jave to c. I also designed a scene-tour system based on panoramic viewer. Later, I began to do study on panoramic stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net to stitch images. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Panoramic viewer ==<br />
*The first panoramic material I consulted was [http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial], and the first panorama system I programmed was a panoramic viewer. [http://webuser.hs-furtwangen.de/~dersch/ ptviewer] was the standard tool for viewing panoramas, but the only source code I could find at the time was the Java applet, so to view panoramas on a local machine we had to port ptviewer from Java to C++. It took me about two months to analyze the interfaces of the source code, document its structure, build a framework, design the classes, write the detailed code, and remove the small bugs. Although my tutor advised me to use Direct3D to speed it up, I found it worked well without Direct3D, so all the source code is plain C++. As I understand it, the principle of ptviewer is as follows: the input is a spherical panorama and the output is a view frame on the screen, with a virtual sphere between them. The frame is generated from three parameters of the virtual sphere, the pan angle, the tilt angle, and the horizontal field of view (hfov), which are controlled in VC.NET by mouse events, keyboard events, and menu or toolbar commands. First the panorama is projected onto the virtual sphere by one pair of formulas; then the frame is projected onto the virtual sphere by another pair of formulas governed by the three parameters, which establishes the relationship between the frame and the panorama. With this backward projection, we can fetch the corresponding panorama pixel for every frame pixel.<br />
*A single panorama covers the view from only one site, which is not enough if we want to tour around a scene. Following my tutor's advice, I built a scene-tour system comprising two parts: the scene-tour designer, which builds a tour, and the scene-tour shower, which plays it back. It involved only interface design, without any intricate theory. First, we gather the material: the scene's map and several panoramas taken in the scene. Second, using the scene-tour designer, we load the map, link each site on the map to its corresponding panorama, and adjust the directions. Last, we save the result as a file ending in ".tour", a structure we defined, which can then be opened in the scene-tour shower to tour around the scene. To build a demo, I downloaded material from a well-known Chinese panorama website, [http://www.jietu.com Jietu].<br />
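The backward projection described in the viewer section above can be sketched in a few lines. This is a hypothetical Python reconstruction, not the ported ptviewer code: it traces a ray through a frame pixel, rotates it by the tilt and pan angles, and converts the ray to (u, v) coordinates in an equirectangular panorama. The rotation order and image conventions here are assumptions.

```python
import math

def view_to_pano(x, y, width, height, pano_w, pano_h, pan, tilt, hfov):
    """Map a pixel (x, y) of the view frame to (u, v) in an
    equirectangular panorama (backward projection sketch)."""
    # Focal length in pixels, derived from the horizontal field of view.
    f = (width / 2.0) / math.tan(hfov / 2.0)
    # Ray through the pixel in camera coordinates (z points forward).
    vx = x - width / 2.0
    vy = y - height / 2.0
    vz = f
    # Tilt: rotation about the x axis (assumed order: tilt, then pan).
    ct, st = math.cos(tilt), math.sin(tilt)
    vy, vz = vy * ct - vz * st, vy * st + vz * ct
    # Pan: rotation about the y axis.
    cp, sp = math.cos(pan), math.sin(pan)
    vx, vz = vx * cp + vz * sp, -vx * sp + vz * cp
    # Spherical angles of the ray, then panorama pixel coordinates.
    lon = math.atan2(vx, vz)                  # -pi .. pi
    lat = math.atan2(vy, math.hypot(vx, vz))  # -pi/2 .. pi/2
    u = (lon / (2 * math.pi) + 0.5) * pano_w
    v = (lat / math.pi + 0.5) * pano_h
    return u, v
```

Evaluating this for every frame pixel (plus interpolation in the panorama) is the per-pixel lookup the paragraph above refers to.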
<br />
== Panoramic stitching ==<br />
Spherical panoramas that can be viewed in ptviewer are usually stitched from images captured with fisheye lenses, which is inconvenient, expensive, and complex. Can we instead stitch overlapping images taken with a hand-held camera? Guided by this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M. Brown’s recognising-panoramas work] was recommended by my tutor, who said that if I could build a system following this paper, we would succeed. So I began, step by step.<br />
*First, we extract control points (SIFT features) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof. Lowe's SIFT paper] carefully until I clearly understood how it works. I then searched the web and found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift], a SIFT implementation in C#, and spent more than a month porting it to C++. The program was revised repeatedly afterwards to make it more efficient and easier to read.<br />
*Second, the key points must be matched. M. Brown matched key points with Best-Bin-First search. I downloaded a kd-tree implementation from [http://www.cs.umd.edu/~mount/ANN/ ANN] and found that it worked well and fast. To remove outliers (mismatched pairs), we modified RANSAC, and our experiments verified its efficiency.<br />
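The matching step above can be illustrated with Lowe's ratio test: a descriptor match is accepted only if the nearest neighbour is clearly closer than the second nearest. The brute-force search below is a hypothetical stand-in for the kd-tree (ANN / Best-Bin-First) query, and the 0.8 threshold is an assumption:

```python
import numpy as np

def match_ratio_test(desc_a, desc_b, ratio=0.8):
    """Match descriptors of image A against image B with Lowe's
    ratio test (brute-force sketch of the nearest-neighbour step)."""
    desc_a = np.asarray(desc_a, dtype=float)
    desc_b = np.asarray(desc_b, dtype=float)
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        # Accept only unambiguous matches.
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches
```

The accepted pairs would then be handed to RANSAC, which keeps only those consistent with a single geometric transform.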
*Third, we calculate the parameters that relate the images to one another. M. Brown used bundle adjustment for this, but it was hard for me to implement and I could find no available source code. After being stuck for a month, I changed course and built a comparatively simpler, partially automatic system: one of the input images must be a central image with which all the other images overlap; the system recognizes this central image and stitches the rest to it. The projection we used is an 8-parameter matrix computed with [http://www.ics.forth.gr/~lourakis/levmar/ Levenberg-Marquardt].<br />
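A minimal sketch of the 8-parameter projective model being fitted: the direct linear transform below (using NumPy's SVD) is an illustrative substitute, not the levmar code, and in the system described above such a linear estimate would typically be the starting point that Levenberg-Marquardt refines.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 projective matrix (normalised so h33 = 1,
    hence 8 free parameters) mapping src points onto dst points,
    via the direct linear transform."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of the stacked constraints.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

Four non-collinear point pairs determine the matrix exactly; with more pairs the SVD gives a least-squares estimate.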
*In fact, I never gave up on building a fully automatic system, and thanks to [http://hugin.sourceforge.net/ Hugin] I finally succeeded. After downloading Hugin, VIGRA, and wxWidgets, I compiled them successfully with VC.NET, debugged the code step by step, and gradually understood how it works. I then began extracting the code I needed, but found the system monolithic, so I decided to build on top of it instead, using Hugin as the framework and integrating the SIFT code and the blending code from Enblend.<br />
I know Hugin is a GUI built on Panorama Tools, whose core code is wonderful, and I have never stopped analyzing its principles.<br />
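The multiresolution-spline (Burt-Adelson) idea behind the Enblend-style blending mentioned above can be sketched in one dimension. This is an illustrative simplification, not Enblend's code: two signals are combined with a mask that is smoothed more at each pyramid level, so the seam stays sharp in fine detail but transitions softly in coarse tones.

```python
import numpy as np

def blur(signal):
    """Simple 1-D binomial smoothing used to build the pyramids."""
    padded = np.pad(signal, 1, mode="edge")
    return 0.25 * padded[:-2] + 0.5 * padded[1:-1] + 0.25 * padded[2:]

def multires_blend(a, b, mask, levels=4):
    """1-D sketch of the multiresolution spline: blend the Laplacian
    detail of a and b with a mask that is blurred at each level."""
    out = np.zeros(len(a), dtype=float)
    la, lb, m = np.asarray(a, float), np.asarray(b, float), np.asarray(mask, float)
    for _ in range(levels):
        sa, sb = blur(la), blur(lb)
        # Blend the detail (Laplacian) band at this scale.
        out += m * (la - sa) + (1 - m) * (lb - sb)
        la, lb, m = sa, sb, blur(m)
    # Blend the remaining coarse residual with the smoothest mask.
    out += m * la + (1 - m) * lb
    return out
```

Because the per-level details telescope back to the original signal, blending a signal with itself (or with a mask of all ones) reproduces the input exactly; the interesting behaviour appears when a and b differ across a soft mask.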
<br />
== Google Summer of Code 2007 ==<br />
I am applying for:<br />
*[http://wiki.panotools.org/SoC2007_project_Feature_Descriptor Automatic feature detection for panoramic images]<br />
*[http://wiki.panotools.org/SoC2007_projects#Anti-ghosting_HDR_panorama_blending_and_merging_algorithm Anti-ghosting HDR panorama blending and merging algorithm]<br />
*[http://wiki.panotools.org/SoC2007_project_Panotools_Architecture Architectural Overhaul of Panotools]</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8582User:Girlliyanli2007-03-24T08:36:35Z<p>Girlliyanli: /* Google Summer of Code 2007 */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panoramic view system running on local machine after port the souce code from jave to c. I also designed a scene-tour system based on panoramic viewer. Later, I began to do study on panoramic stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net to stitch images. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Panoramic viewer ==<br />
*The first panoramic material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
*One panorama can only cover view from one site, the information is not enough. If we want to tour around a scene, what should to do? Yes, it needs us to do more. Following my tutor advice, I built a scene-tour system. The scene tour system comprises two parts, one is to build the scene tour, being called scene-tour designer, another is to view the scene tour, being called scene-tour shower. It justly involved interface design without any intricate theory: At first, we should get some material, including the scene's map and several panoramas taken from the scene. Secondly, we input the map and build the relationship between the map's site and corresponding panorama with the scene-tour designer and adjust the directions. Last, we saved and get a file end with ".tour", which is structure we defined. Now we can view this file on scene-tour shower and tour around the scene. To build a demo, I downloaded material from a famous Chinese panoramic website[http://www.jietu.com Jietu]<br />
<br />
==Panoramic stitching==<br />
Spherical panoramas which can be viewed through ptviewer are usually stitched from images captured with fisheye cameras. It is inconvenient, expensive and complex. Can we stitch overlapped images taken with hand-hold cameras? Under the guide of this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M.Brown’s recognizing panorama]was recommended by my tutor. He said if I could build a system according to this article, we would make it. So I began to do it step by step.<br />
*Firstly, we should extract control points(sift) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof.Lown's sift]carefully and knew clearly how it works. Then I searched on web and found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift],which is source code of sift in c#. It took me more than one month port it from c# to c++. Later on, the program was modified from time to time, being more efficient and more easily read. <br />
*Secondly, key points should been matched. M.Brown matched key points with Best Bin Fast. I download a source code about kd tree from[http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. As for the outliners' removing(unmatched pairs), we made modification on RANSAC to removed them and our experiment verified its efficiency.<br />
*Thirdly, we should calculate parameters, so that we can build relationship among those images with those parameter. M.Brown used bundle-adjustment to calculate parameter. It is hard for me to realize it and I did not find available source code. Detaining for a month, I changed my mind and started to build a comparatively easier system. It is limited and partial automatic. One of inputted image should be a central image, that is, other images are all overlapped with it. The system can recognize this central image and stitching them together. The projection we used is 8-parameters matrix which is calculate with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. <br />
*In fact, I have never given up building up a totally automatic system.Thanks to [http://hugin.sourceforge.net/ Hugin], I made it at last. After downloading hugin and download vigra and wxwindow. I compiled them successfully with VC.net. I debugged it step by step and gradually realize how it works. Then I began to extract useful code I needed and found the system monolithic. So I decided to build my system on it, using hugin as framework and integrating sift's code and blending'code from Enblend.<br />
I know hugin is a GUI based on panotools, which is a wonderful core code. I have never stop analyzing the principle of it.<br />
<br />
== Google Summer of Code 2007 ==<br />
I am applying for:<br />
*Automatic feature detection for panoramic images<br />
*Anti-ghosting HDR panorama blending and merging algorithm <br />
*Architectural Overhaul of Panotools</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8581User:Girlliyanli2007-03-24T08:35:29Z<p>Girlliyanli: /* Google Summer of Code 2007 */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panoramic view system running on local machine after port the souce code from jave to c. I also designed a scene-tour system based on panoramic viewer. Later, I began to do study on panoramic stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net to stitch images. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Panoramic viewer ==<br />
*The first panoramic material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
*One panorama can only cover view from one site, the information is not enough. If we want to tour around a scene, what should to do? Yes, it needs us to do more. Following my tutor advice, I built a scene-tour system. The scene tour system comprises two parts, one is to build the scene tour, being called scene-tour designer, another is to view the scene tour, being called scene-tour shower. It justly involved interface design without any intricate theory: At first, we should get some material, including the scene's map and several panoramas taken from the scene. Secondly, we input the map and build the relationship between the map's site and corresponding panorama with the scene-tour designer and adjust the directions. Last, we saved and get a file end with ".tour", which is structure we defined. Now we can view this file on scene-tour shower and tour around the scene. To build a demo, I downloaded material from a famous Chinese panoramic website[http://www.jietu.com Jietu]<br />
<br />
==Panoramic stitching==<br />
Spherical panoramas which can be viewed through ptviewer are usually stitched from images captured with fisheye cameras. It is inconvenient, expensive and complex. Can we stitch overlapped images taken with hand-hold cameras? Under the guide of this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M.Brown’s recognizing panorama]was recommended by my tutor. He said if I could build a system according to this article, we would make it. So I began to do it step by step.<br />
*Firstly, we should extract control points(sift) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof.Lown's sift]carefully and knew clearly how it works. Then I searched on web and found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift],which is source code of sift in c#. It took me more than one month port it from c# to c++. Later on, the program was modified from time to time, being more efficient and more easily read. <br />
*Secondly, key points should been matched. M.Brown matched key points with Best Bin Fast. I download a source code about kd tree from[http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. As for the outliners' removing(unmatched pairs), we made modification on RANSAC to removed them and our experiment verified its efficiency.<br />
*Thirdly, we should calculate parameters, so that we can build relationship among those images with those parameter. M.Brown used bundle-adjustment to calculate parameter. It is hard for me to realize it and I did not find available source code. Detaining for a month, I changed my mind and started to build a comparatively easier system. It is limited and partial automatic. One of inputted image should be a central image, that is, other images are all overlapped with it. The system can recognize this central image and stitching them together. The projection we used is 8-parameters matrix which is calculate with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. <br />
*In fact, I have never given up building up a totally automatic system.Thanks to [http://hugin.sourceforge.net/ Hugin], I made it at last. After downloading hugin and download vigra and wxwindow. I compiled them successfully with VC.net. I debugged it step by step and gradually realize how it works. Then I began to extract useful code I needed and found the system monolithic. So I decided to build my system on it, using hugin as framework and integrating sift's code and blending'code from Enblend.<br />
I know hugin is a GUI based on panotools, which is a wonderful core code. I have never stop analyzing the principle of it.<br />
<br />
== Google Summer of Code 2007 ==<br />
My intrested projects:<br />
*Automatic feature detection for panoramic images<br />
*Anti-ghosting HDR panorama blending and merging algorithm <br />
*Architectural Overhaul of Panotools</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8580User:Girlliyanli2007-03-24T07:44:51Z<p>Girlliyanli: /* Education Background */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student.<br />
**No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate.<br />
**Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate.<br />
**Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student.<br />
**Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panoramic view system running on local machine after port the souce code from jave to c. I also designed a scene-tour system based on panoramic viewer. Later, I began to do study on panoramic stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net to stitch images. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Panoramic viewer ==<br />
*The first panoramic material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
*One panorama can only cover view from one site, the information is not enough. If we want to tour around a scene, what should to do? Yes, it needs us to do more. Following my tutor advice, I built a scene-tour system. The scene tour system comprises two parts, one is to build the scene tour, being called scene-tour designer, another is to view the scene tour, being called scene-tour shower. It justly involved interface design without any intricate theory: At first, we should get some material, including the scene's map and several panoramas taken from the scene. Secondly, we input the map and build the relationship between the map's site and corresponding panorama with the scene-tour designer and adjust the directions. Last, we saved and get a file end with ".tour", which is structure we defined. Now we can view this file on scene-tour shower and tour around the scene. To build a demo, I downloaded material from a famous Chinese panoramic website[http://www.jietu.com Jietu]<br />
<br />
==Panoramic stitching==<br />
Spherical panoramas which can be viewed through ptviewer are usually stitched from images captured with fisheye cameras. It is inconvenient, expensive and complex. Can we stitch overlapped images taken with hand-hold cameras? Under the guide of this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M.Brown’s recognizing panorama]was recommended by my tutor. He said if I could build a system according to this article, we would make it. So I began to do it step by step.<br />
*Firstly, we should extract control points(sift) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof.Lown's sift]carefully and knew clearly how it works. Then I searched on web and found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift],which is source code of sift in c#. It took me more than one month port it from c# to c++. Later on, the program was modified from time to time, being more efficient and more easily read. <br />
*Secondly, key points should been matched. M.Brown matched key points with Best Bin Fast. I download a source code about kd tree from[http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. As for the outliners' removing(unmatched pairs), we made modification on RANSAC to removed them and our experiment verified its efficiency.<br />
*Thirdly, we should calculate parameters, so that we can build relationship among those images with those parameter. M.Brown used bundle-adjustment to calculate parameter. It is hard for me to realize it and I did not find available source code. Detaining for a month, I changed my mind and started to build a comparatively easier system. It is limited and partial automatic. One of inputted image should be a central image, that is, other images are all overlapped with it. The system can recognize this central image and stitching them together. The projection we used is 8-parameters matrix which is calculate with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. <br />
*In fact, I have never given up building up a totally automatic system.Thanks to [http://hugin.sourceforge.net/ Hugin], I made it at last. After downloading hugin and download vigra and wxwindow. I compiled them successfully with VC.net. I debugged it step by step and gradually realize how it works. Then I began to extract useful code I needed and found the system monolithic. So I decided to build my system on it, using hugin as framework and integrating sift's code and blending'code from Enblend.<br />
I know hugin is a GUI based on panotools, which is a wonderful core code. I have never stop analyzing the principle of it.<br />
<br />
== Google Summer of Code 2007 ==<br />
<br />
Later</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8579User:Girlliyanli2007-03-24T07:43:22Z<p>Girlliyanli: /* Programming Experience */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student. No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate. Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate. Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student. Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
*My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
*When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panoramic view system running on local machine after port the souce code from jave to c. I also designed a scene-tour system based on panoramic viewer. Later, I began to do study on panoramic stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net to stitch images. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Panoramic viewer ==<br />
*The first panoramic material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
*One panorama only covers the view from a single site, which is not enough information to tour a whole scene. Following my tutor's advice, I built a scene-tour system with two parts: the scene-tour designer, which builds the tour, and the scene-tour shower, which plays it back. It involved only interface design, without any intricate theory. First, we gather the material: a map of the scene and several panoramas taken in it. Second, using the scene-tour designer, we load the map, link each site on the map to its corresponding panorama, and adjust the viewing directions. Last, we save the result as a ".tour" file, a structure we defined ourselves. This file can then be opened in the scene-tour shower to tour the scene. To build a demo, I downloaded material from [http://www.jietu.com Jietu], a famous Chinese panoramic website.<br />
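The backward projection described above can be sketched in a few lines of C++. This is my reading of the idea, not ptviewer's actual code; the function name, the rotation order (tilt about x, then pan about y) and the equirectangular layout are illustrative assumptions:

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Map a screen pixel to equirectangular panorama coordinates.
// Angles are in radians; panoW x panoH is the panorama size.
// pan/tilt/hfov are the three viewing parameters from the text;
// everything else is an illustrative assumption.
std::pair<double, double> screenToPano(int px, int py,
                                       int frameW, int frameH,
                                       double pan, double tilt, double hfov,
                                       int panoW, int panoH)
{
    // Place the frame on a plane at distance f from the sphere centre,
    // so that the frame width subtends exactly hfov.
    double f = (frameW / 2.0) / std::tan(hfov / 2.0);
    double x = px - frameW / 2.0;
    double y = py - frameH / 2.0;
    double z = f;

    // Rotate the viewing ray: tilt about the x-axis, then pan about the y-axis.
    double y1 = y * std::cos(tilt) - z * std::sin(tilt);
    double z1 = y * std::sin(tilt) + z * std::cos(tilt);
    double x2 = x * std::cos(pan) + z1 * std::sin(pan);
    double z2 = -x * std::sin(pan) + z1 * std::cos(pan);

    // Ray direction -> spherical angles -> equirectangular pixel.
    double lon = std::atan2(x2, z2);                            // [-pi, pi]
    double lat = std::atan2(y1, std::sqrt(x2 * x2 + z2 * z2));  // [-pi/2, pi/2]
    double u = (lon / M_PI + 1.0) * 0.5 * panoW;
    double v = (lat / (M_PI / 2.0) + 1.0) * 0.5 * panoH;
    return {u, v};
}
```

Running this per frame pixel and sampling the panorama at (u, v) yields the rendered view; with pan = tilt = 0, the frame centre maps to the panorama centre, as expected.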
<br />
==Panoramic stitching==<br />
Spherical panoramas viewable in ptviewer are usually stitched from images captured with fisheye lenses, which is inconvenient, expensive and complex. Can we instead stitch overlapping images taken with a hand-held camera? With this goal in mind I consulted many articles. My tutor recommended [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M. Brown’s "Recognising Panoramas"] and said that if I could build a system according to this article, we would succeed. So I began, step by step.<br />
*Firstly, we extract control points (SIFT features) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof. Lowe's SIFT paper] carefully until I understood clearly how it works. Searching the web, I found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift], a SIFT implementation in C#, and spent more than a month porting it to C++. Later I revised the program from time to time, making it more efficient and more readable. <br />
*Secondly, the keypoints must be matched. M. Brown matched keypoints with Best-Bin-First search. I downloaded a kd-tree implementation from [http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. To remove outliers (mismatched pairs), we modified RANSAC, and our experiments verified its efficiency.<br />
*Thirdly, we compute the parameters that relate the images to one another. M. Brown used bundle adjustment to compute them, but it was hard for me to implement and I found no available source code. After a month's delay I changed my mind and built a comparatively simpler system. It is limited and only partially automatic: one of the input images must be a central image with which all the others overlap. The system recognizes this central image and stitches the rest to it. The projection we used is an 8-parameter homography, computed with [http://www.ics.forth.gr/~lourakis/levmar/ Levenberg-Marquardt (levmar)]. <br />
*In fact, I never gave up on building a fully automatic system, and thanks to [http://hugin.sourceforge.net/ Hugin] I finally succeeded. After downloading Hugin, VIGRA and wxWidgets, I compiled them successfully with VC.NET, debugged the code step by step, and gradually understood how it works. I then began extracting the code I needed, but found the system monolithic, so I decided to build my system on top of it, using Hugin as the framework and integrating the SIFT code and the blending code from Enblend.<br />
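The matching step in the pipeline above can be illustrated with a brute-force nearest-neighbour search; a kd-tree (as in ANN, with Best-Bin-First) only speeds up the same query, the accept/reject logic is identical. The descriptor layout and the 0.8 ratio threshold are illustrative assumptions, not values from my system:

```cpp
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

// Return the index of the best-matching descriptor in db for query q,
// or -1 if the match fails Lowe's ratio test (best distance must be
// clearly smaller than the second-best). Distances are squared L2.
int matchDescriptor(const std::vector<double>& q,
                    const std::vector<std::vector<double>>& db,
                    double ratio = 0.8)
{
    double d1 = std::numeric_limits<double>::max(), d2 = d1;
    int best = -1;
    for (int i = 0; i < (int)db.size(); ++i) {
        double d = 0;
        for (size_t k = 0; k < q.size(); ++k) {
            double e = q[k] - db[i][k];
            d += e * e;
        }
        if (d < d1) { d2 = d1; d1 = d; best = i; }
        else if (d < d2) { d2 = d; }
    }
    // Accept only an unambiguous nearest neighbour (compare squared distances).
    return (best >= 0 && d1 < ratio * ratio * d2) ? best : -1;
}
```

A query close to one database descriptor is accepted; a query equidistant from two descriptors is rejected as ambiguous, which is exactly what filters repetitive texture before RANSAC.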
I know Hugin is a GUI built on Panorama Tools, which is wonderful core code, and I have never stopped analyzing its principles.<br />
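The RANSAC outlier rejection used throughout the pipeline can be sketched as follows. To keep the example short it fits a pure translation instead of the full 8-parameter homography described above; the iteration count, tolerance and all names are illustrative assumptions, and the consensus idea is unchanged:

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

struct Pt { double x, y; };
struct Match { Pt a, b; };

// RANSAC: repeatedly fit a model to a minimal random sample and keep
// the model with the largest consensus set. Here one match fixes a
// translation; matches farther than tol pixels from it are outliers.
std::vector<int> ransacInliers(const std::vector<Match>& m,
                               int iters = 200, double tol = 3.0)
{
    std::mt19937 rng(42);  // fixed seed for reproducibility
    std::uniform_int_distribution<int> pick(0, (int)m.size() - 1);
    std::vector<int> best;
    for (int it = 0; it < iters; ++it) {
        const Match& s = m[pick(rng)];           // minimal sample
        double dx = s.b.x - s.a.x, dy = s.b.y - s.a.y;
        std::vector<int> inl;
        for (int i = 0; i < (int)m.size(); ++i) {
            double ex = m[i].a.x + dx - m[i].b.x;
            double ey = m[i].a.y + dy - m[i].b.y;
            if (std::hypot(ex, ey) < tol) inl.push_back(i);
        }
        if (inl.size() > best.size()) best = inl;
    }
    return best;  // indices of the consensus (inlier) matches
}
```

With a homography in place of the translation, the minimal sample becomes four matches and the residual becomes reprojection error, but the loop is the same; speeding up exactly this loop is where our RANSAC modification applied.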
<br />
== Google Summer of Code 2007 ==<br />
<br />
Later</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8578User:Girlliyanli2007-03-24T07:42:44Z<p>Girlliyanli: /* Panoramic stitching */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student. No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate. Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate. Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student. Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panoramic view system running on local machine after port the souce code from jave to c. I also designed a scene-tour system based on panoramic viewer. Later, I began to do study on panoramic stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net to stitch images. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Panoramic viewer ==<br />
*The first panoramic material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
*One panorama can only cover view from one site, the information is not enough. If we want to tour around a scene, what should to do? Yes, it needs us to do more. Following my tutor advice, I built a scene-tour system. The scene tour system comprises two parts, one is to build the scene tour, being called scene-tour designer, another is to view the scene tour, being called scene-tour shower. It justly involved interface design without any intricate theory: At first, we should get some material, including the scene's map and several panoramas taken from the scene. Secondly, we input the map and build the relationship between the map's site and corresponding panorama with the scene-tour designer and adjust the directions. Last, we saved and get a file end with ".tour", which is structure we defined. Now we can view this file on scene-tour shower and tour around the scene. To build a demo, I downloaded material from a famous Chinese panoramic website[http://www.jietu.com Jietu]<br />
<br />
==Panoramic stitching==<br />
Spherical panoramas which can be viewed through ptviewer are usually stitched from images captured with fisheye cameras. It is inconvenient, expensive and complex. Can we stitch overlapped images taken with hand-hold cameras? Under the guide of this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M.Brown’s recognizing panorama]was recommended by my tutor. He said if I could build a system according to this article, we would make it. So I began to do it step by step.<br />
*Firstly, we should extract control points(sift) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof.Lown's sift]carefully and knew clearly how it works. Then I searched on web and found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift],which is source code of sift in c#. It took me more than one month port it from c# to c++. Later on, the program was modified from time to time, being more efficient and more easily read. <br />
*Secondly, key points should been matched. M.Brown matched key points with Best Bin Fast. I download a source code about kd tree from[http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. As for the outliners' removing(unmatched pairs), we made modification on RANSAC to removed them and our experiment verified its efficiency.<br />
*Thirdly, we should calculate parameters, so that we can build relationship among those images with those parameter. M.Brown used bundle-adjustment to calculate parameter. It is hard for me to realize it and I did not find available source code. Detaining for a month, I changed my mind and started to build a comparatively easier system. It is limited and partial automatic. One of inputted image should be a central image, that is, other images are all overlapped with it. The system can recognize this central image and stitching them together. The projection we used is 8-parameters matrix which is calculate with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. <br />
*In fact, I have never given up building up a totally automatic system.Thanks to [http://hugin.sourceforge.net/ Hugin], I made it at last. After downloading hugin and download vigra and wxwindow. I compiled them successfully with VC.net. I debugged it step by step and gradually realize how it works. Then I began to extract useful code I needed and found the system monolithic. So I decided to build my system on it, using hugin as framework and integrating sift's code and blending'code from Enblend.<br />
I know hugin is a GUI based on panotools, which is a wonderful core code. I have never stop analyzing the principle of it.<br />
<br />
== Google Summer of Code 2007 ==<br />
<br />
Later</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8577User:Girlliyanli2007-03-24T07:37:38Z<p>Girlliyanli: /* Panoramic viewer */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student. No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate. Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate. Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student. Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panoramic view system running on local machine after port the souce code from jave to c. I also designed a scene-tour system based on panoramic viewer. Later, I began to do study on panoramic stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net to stitch images. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Panoramic viewer ==<br />
*The first panoramic material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
*One panorama can only cover view from one site, the information is not enough. If we want to tour around a scene, what should to do? Yes, it needs us to do more. Following my tutor advice, I built a scene-tour system. The scene tour system comprises two parts, one is to build the scene tour, being called scene-tour designer, another is to view the scene tour, being called scene-tour shower. It justly involved interface design without any intricate theory: At first, we should get some material, including the scene's map and several panoramas taken from the scene. Secondly, we input the map and build the relationship between the map's site and corresponding panorama with the scene-tour designer and adjust the directions. Last, we saved and get a file end with ".tour", which is structure we defined. Now we can view this file on scene-tour shower and tour around the scene. To build a demo, I downloaded material from a famous Chinese panoramic website[http://www.jietu.com Jietu]<br />
<br />
==Panoramic stitching==<br />
Spherical panoramas which can be viewed through ptviewer are usually stitched from images captured with fisheye cameras. It is inconvenient, expensive and complex. Can we stitch overlapped images taken with hand-hold cameras? Under the guide of this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M.Brown’s recognizing panorama]was recommended by my tutor. He said if I could build a system according to this article, we would make it. So I began to do it step by step.<br />
Firstly, we should extract control points(sift) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof.Lown's sift]carefully and knew clearly how it works. Then I searched on web and found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift],which is source code of sift in c#. It took me more than one month port it from c# to c++. Later on, the program was modified from time to time, being more efficient and more easily read. Secondly, key points should been matched. M.Brown matched key points with Best Bin Fast. I download a source code about kd tree from[http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. As for the outliners' removing(unmatched pairs), we made modification on RANSAC to removed them and our experiment verified its efficiency. Thirdly, we should calculate parameters, so that we can build relationship among those images with those parameter. M.Brown used bundle-adjustment to calculate parameter. It is hard for me to realize it and I did not find available source code. Detaining for a month, I changed my mind and started to build a comparatively easier system. It is limited and partial automatic. One of inputted image should be a central image, that is, other images are all overlapped with it. The system can recognize this central image and stitching them together. The projection we used is 8-parameters matrix which is calculate with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. In fact, I have never given up building up a totally automatic system. Thanks to [http://hugin.sourceforge.net/ Hugin], I made it at last. After downloading hugin and download vigra and wxwindow. I compiled them successfully with VC.net. I debugged it step by step and gradually realize how it works. Then I began to extract useful code I needed and found the system monolithic. 
So I decided to build my system on it, using hugin as framework and integrating sift's code and blending'code from Enblend.I know hugin is a GUI based on panotools, which is a wonderful core code. I have never stop analyzing the principle of it and will never.<br />
<br />
== Google Summer of Code 2007 ==<br />
<br />
Later</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8576User:Girlliyanli2007-03-24T07:20:48Z<p>Girlliyanli: /* Panoramic stitching */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student. No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate. Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate. Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student. Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panoramic view system running on local machine after port the souce code from jave to c. I also designed a scene-tour system based on panoramic viewer. Later, I began to do study on panoramic stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net to stitch images. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Panoramic viewer ==<br />
The first panoramic material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].<br />
The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:<br />
what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
One panorama can only cover view from one site, the information is not enough. If we want to tour around a scene, what should to do? Yes, it needs us to do more. Following my tutor advice, I built a scene-tour system. The scene tour system comprises two parts, one is to build the scene tour, being called scene-tour designer, another is to view the scene tour, being called scene-tour shower. It justly involved interface design without any intricate theory: At first, we should get some material, including the scene's map and several panoramas taken from the scene. Secondly, we input the map and build the relationship between the map's site and corresponding panorama with the scene-tour designer and adjust the directions. Last, we saved and get a file end with ".tour", which is structure we defined. Now we can view this file on scene-tour shower and tour around the scene. To build a demo, I downloaded material from a famous Chinese panoramic website[http://www.jietu.com Jietu]<br />
<br />
==Panoramic stitching==<br />
Spherical panoramas which can be viewed through ptviewer are usually stitched from images captured with fisheye cameras. It is inconvenient, expensive and complex. Can we stitch overlapped images taken with hand-hold cameras? Under the guide of this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M.Brown’s recognizing panorama]was recommended by my tutor. He said if I could build a system according to this article, we would make it. So I began to do it step by step.<br />
Firstly, we should extract control points(sift) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof.Lown's sift]carefully and knew clearly how it works. Then I searched on web and found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift],which is source code of sift in c#. It took me more than one month port it from c# to c++. Later on, the program was modified from time to time, being more efficient and more easily read. Secondly, key points should been matched. M.Brown matched key points with Best Bin Fast. I download a source code about kd tree from[http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. As for the outliners' removing(unmatched pairs), we made modification on RANSAC to removed them and our experiment verified its efficiency. Thirdly, we should calculate parameters, so that we can build relationship among those images with those parameter. M.Brown used bundle-adjustment to calculate parameter. It is hard for me to realize it and I did not find available source code. Detaining for a month, I changed my mind and started to build a comparatively easier system. It is limited and partial automatic. One of inputted image should be a central image, that is, other images are all overlapped with it. The system can recognize this central image and stitching them together. The projection we used is 8-parameters matrix which is calculate with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. In fact, I have never given up building up a totally automatic system. Thanks to [http://hugin.sourceforge.net/ Hugin], I made it at last. After downloading hugin and download vigra and wxwindow. I compiled them successfully with VC.net. I debugged it step by step and gradually realize how it works. Then I began to extract useful code I needed and found the system monolithic. 
So I decided to build my system on it, using hugin as framework and integrating sift's code and blending'code from Enblend.I know hugin is a GUI based on panotools, which is a wonderful core code. I have never stop analyzing the principle of it and will never.<br />
<br />
== Google Summer of Code 2007 ==<br />
<br />
Later</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8575User:Girlliyanli2007-03-24T07:15:46Z<p>Girlliyanli: /* Programming Experience */</p>
<hr />
<div>My name is Yanli Li, I'm a chinese student, doing study on panorama.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student. No.1 Middle School of KaiFeng County,Henai province, China<br />
* 2000.09-2004.07 Undergraduate. Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate. Human-Computer Interface and Virtual Reality Lab, Department of Computer Science,[http://www.sdu.edu.cn/english05/ Shandong University],Jinan, Shandong, China<br />
* 2007.09- Doctoral student. Virtual Reality lab(one of state key labs),[http://ev.buaa.edu.cn/ Beijing University of Aeronautics and Astronautics],Beijing,China<br />
<br />
== Programming Experience ==<br />
My main programming language is c/c++. I started to program in it since 2001 and I have never stopped since then. Most of my designs are about image processing. My graduate design which was named System of Text-Image Preprocessing and awarded Shandong University'Excellent Undergraduate Design involved lots of knowledge about image's processing, so I know well about such field.<br />
When I enrolled in HCI&VR lab in 2004. I began to do study on panorama. At first, I analyzed ptviewer's java code and bulit a panoramic view system running on local machine after port the souce code from jave to c. I also designed a scene-tour system based on panoramic viewer. Later, I began to do study on panoramic stitching. After studying the souce code of Hugin, sift, panotools and enblend, I built a panoramic stitching system based on vc.net to stitch images. It is a totally automatic system: the framework I used is hugin, the control point I used is sift, the matching part is based on kd tree, the blending method I used is Multiresolution Spline, and modification was made on RANSAC so that outliners can been removed more quickly.<br />
<br />
== Panoramic viewer ==<br />
The first panoramic material I consulted was[http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].<br />
The first system about panorama I programmed was about panoramic viewer. As we all know,[http://webuser.hs-furtwangen.de/~dersch/ ptviewer]was used to view panorama, but the source code I could find then was wrote in java which was from a java applet. So it seemed if we wanted to view panorama on local machine, we had to port the ptviewer from java to C++. Then it took me about two month to analyze the interface of the source code and wrote some documents about its structure and built a framework and designed several classes and wrote the detail code and remove small bug. Although my tutor advised me use direct3d to speed up. I found it worked well without the direct3D. So all the source code is c++.The principle of ptviewer I think is as following:<br />
what should been inputted is a spherical panorama, what we want to get is a frame on the screen. There is a virtual sphere between them. The frame can be created according three parameter of the virtual sphere: pan angle, tilt angle and hfov angle which can be controlled on VC.NET by mouse event, keyboard event and menu or tool. At first, the panorama are projected on the virtual sphere according to two formula, then the frame are projected to the virtual sphere according to two formula controlled by the three parameters, now the relationship between a frame and panorama are built. With the backward projection, we can get the corresponding pixel from panorama to the frame.<br />
One panorama can only cover view from one site, the information is not enough. If we want to tour around a scene, what should to do? Yes, it needs us to do more. Following my tutor advice, I built a scene-tour system. The scene tour system comprises two parts, one is to build the scene tour, being called scene-tour designer, another is to view the scene tour, being called scene-tour shower. It justly involved interface design without any intricate theory: At first, we should get some material, including the scene's map and several panoramas taken from the scene. Secondly, we input the map and build the relationship between the map's site and corresponding panorama with the scene-tour designer and adjust the directions. Last, we saved and get a file end with ".tour", which is structure we defined. Now we can view this file on scene-tour shower and tour around the scene. To build a demo, I downloaded material from a famous Chinese panoramic website[http://www.jietu.com Jietu]<br />
<br />
==Panoramic stitching==<br />
Spherical panoramas that can be viewed in ptviewer are usually stitched from images captured with fisheye lenses, which is inconvenient, expensive and complex. Could we instead stitch overlapping images taken with a hand-held camera? With this goal in mind, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M.Brown’s Recognising Panoramas] was recommended by my tutor, who said that if I could build a system following this article, we would make it. So I began to do it step by step.<br />
Firstly, we extract control points (SIFT features) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof. Lowe's SIFT paper] carefully and came to understand clearly how it works. I then searched the web and found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift], a C# implementation of SIFT, and spent more than one month porting it to C++. Later on I revised the program from time to time, making it more efficient and easier to read. Secondly, the keypoints must be matched. M.Brown matched keypoints with Best-Bin-First search; I downloaded kd-tree source code from [http://www.cs.umd.edu/~mount/ANN/ ANN] and found that it worked well and fast. To remove outliers (mismatched pairs), we modified RANSAC, and our experiments verified its efficiency. Thirdly, we must calculate the parameters that relate the images to one another. M.Brown used bundle adjustment for this step, but it was hard for me to implement and I could not find available source code. After being stuck for a month, I changed my mind and built a comparatively easier system, which is limited and only partially automatic: one of the input images must be a central image, that is, an image that overlaps all the others. The system recognizes this central image and stitches the rest to it. The projection we used is an 8-parameter homography matrix, calculated with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. Even so, I never gave up on building a fully automatic system, and thanks to [http://hugin.sourceforge.net/ Hugin] I made it at last. After downloading Hugin, VIGRA and wxWidgets, I compiled them successfully with VC.net, debugged the code step by step and gradually came to understand how it works. I then began to extract the code I needed, and found the system monolithic. 
So I decided to build my system on top of it, using Hugin as the framework and combining the SIFT code with the blending code from Enblend. I know Hugin is a GUI built on Panorama Tools, which is a wonderful core library; I have never stopped analyzing its principles, and never will.<br />
<br />
== Google Summer of Code 2007 ==<br />
<br />
Later</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8570User:Girlliyanli2007-03-24T03:04:54Z<p>Girlliyanli: /* Panoramic viewer */</p>
<hr />
<div>My name is Yanli Li. I am a Chinese student doing research on panoramas.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student. No.1 Middle School of Kaifeng County, Henan province, China<br />
* 2000.09-2004.07 Undergraduate. Department of Computer Science, Shandong University, Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate. Human-Computer Interface and Virtual Reality Lab, Department of Computer Science, Shandong University, Jinan, Shandong, China<br />
* 2007.09- Doctoral student. Virtual Reality Lab (a state key lab), Beijing University of Aeronautics and Astronautics, Beijing, China<br />
<br />
== Programming Experience ==<br />
My favorite programming language is C/C++. I have been programming in it since 2001 and have never stopped. Most of my projects involve image processing: my undergraduate design project, a System of Text-Image Preprocessing, won Shandong University's Excellent Undergraduate Design award and involved a great deal of image-processing knowledge, so I know the field well.<br />
When I enrolled in the HCI&VR lab in 2004, I began to study panoramas. At first, I analyzed ptviewer's Java code and built a panoramic viewing system that runs on a local machine, after porting the source code from Java to C. I also designed a scene-tour system based on the panoramic viewer. Later, I turned to panoramic stitching. After studying the source code of Hugin, SIFT, Panorama Tools and Enblend, I built a panoramic stitching system on VC.net. It is a fully automatic system: the framework is Hugin, the control points come from SIFT, the matching is based on a kd-tree, the blending method is the multiresolution spline, and RANSAC was modified so that outliers can be removed more quickly.<br />
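As a toy illustration of the multiresolution spline mentioned above, here is a minimal sketch on 1-D signals (stand-ins for image rows). The names (`blendMultires`, `down`, `up`) and the pair-averaging/sample-repetition filters are my own simplifications of the Gaussian filtering used on real images, but the structure is the Burt-Adelson idea: blend the Laplacian pyramid levels of the two inputs under the mask's own pyramid, then collapse.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Signal = std::vector<double>;

// Halve resolution by averaging adjacent pairs (a stand-in for the
// Gaussian-filtered downsampling used on real images).
static Signal down(const Signal& s) {
    Signal r(s.size() / 2);
    for (size_t i = 0; i < r.size(); ++i) r[i] = 0.5 * (s[2*i] + s[2*i+1]);
    return r;
}

// Double resolution by sample repetition (nearest-neighbour upsampling).
static Signal up(const Signal& s) {
    Signal r(s.size() * 2);
    for (size_t i = 0; i < s.size(); ++i) r[2*i] = r[2*i+1] = s[i];
    return r;
}

// Multiresolution spline blending on 1-D signals whose length is a power
// of two: recurse to the coarsest level, then at each level combine the
// two Laplacian residuals under the downsampled mask while collapsing.
Signal blendMultires(const Signal& a, const Signal& b, const Signal& mask) {
    if (a.size() == 1) {
        // Coarsest level: blend the remaining low-pass value directly.
        return { mask[0] * a[0] + (1.0 - mask[0]) * b[0] };
    }
    Signal da = down(a), db = down(b), dm = down(mask);
    Signal coarse = blendMultires(da, db, dm);   // blend the next level
    Signal ua = up(da), ub = up(db), uc = up(coarse);
    Signal out(a.size());
    for (size_t i = 0; i < a.size(); ++i) {
        double la = a[i] - ua[i];                // Laplacian residual of a
        double lb = b[i] - ub[i];                // Laplacian residual of b
        out[i] = uc[i] + mask[i] * la + (1.0 - mask[i]) * lb;
    }
    return out;
}
```

Because each pyramid level blends over a correspondingly wider support, a hard mask produces a seam that is sharp in fine detail but smooth in low frequencies, which is exactly why this method hides stitching seams.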
<br />
== Panoramic viewer ==<br />
The first panoramic material I consulted was [http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial].<br />
The first panorama system I programmed was a panoramic viewer. PTViewer was the standard tool for viewing panoramas, but the only source code I could find at the time was written in Java, taken from a Java applet. So it seemed that if we wanted to view panoramas on a local machine, we had to port PTViewer from Java to C++. It took me about two months to analyze the source code's interfaces, write documents about its structure, build a framework, design several classes, write the detailed code and remove small bugs. My tutor advised me to use Direct3D to speed it up, but I found it worked well without Direct3D, so all the source code is C++. The principle of PTViewer, as I understand it, is as follows:<br />
The input is a spherical panorama and the desired output is a frame on the screen, with a virtual sphere between them. The frame is generated from three parameters of the virtual sphere: the pan angle, the tilt angle and the hfov angle, which are controlled in VC.NET by mouse events, keyboard events and the menu or toolbar. First the panorama is projected onto the virtual sphere by two formulas; then the frame is projected onto the virtual sphere by two formulas governed by the three parameters. This establishes the relationship between a frame and the panorama, and with the backward projection we can fetch the corresponding panorama pixel for each frame pixel.<br />
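The backward projection described above can be sketched in C++. This is an illustrative reconstruction under assumed conventions (camera looking down +z, y pointing down, equirectangular panorama), not PTViewer's actual code; `frameToPano` and all parameter names are mine:

```cpp
#include <cmath>
#include <utility>

// Map a view-frame pixel (x, y) to a pixel (u, v) in an equirectangular
// panorama of size pw x ph, for a virtual camera with the three viewer
// parameters described above: pan, tilt (radians) and hfov (radians).
// This is the backward projection: one panorama lookup per output pixel.
std::pair<double, double> frameToPano(double x, double y,
                                      int fw, int fh,
                                      double pan, double tilt, double hfov,
                                      int pw, int ph) {
    const double PI = 3.14159265358979323846;
    double f = (fw / 2.0) / std::tan(hfov / 2.0);  // focal length in pixels
    // Ray through the pixel, relative to the frame centre.
    double dx = x - fw / 2.0, dy = y - fh / 2.0, dz = f;
    // Tilt: rotate the ray about the x-axis.
    double dy2 = dy * std::cos(tilt) - dz * std::sin(tilt);
    double dz2 = dy * std::sin(tilt) + dz * std::cos(tilt);
    // Pan: rotate about the y-axis.
    double dx3 = dx * std::cos(pan) + dz2 * std::sin(pan);
    double dz3 = -dx * std::sin(pan) + dz2 * std::cos(pan);
    // Spherical coordinates of the ray.
    double lon = std::atan2(dx3, dz3);                              // [-pi, pi]
    double lat = std::atan2(dy2, std::sqrt(dx3 * dx3 + dz3 * dz3)); // [-pi/2, pi/2]
    // Equirectangular lookup.
    double u = (lon / (2.0 * PI) + 0.5) * pw;
    double v = (lat / PI + 0.5) * ph;
    return {u, v};
}
```

For each frame pixel, the viewer would then sample the panorama at (u, v), typically with bilinear interpolation.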
One panorama only covers the view from one site, which is not enough information if we want to tour a whole scene. Following my tutor's advice, I built a scene-tour system. It comprises two parts: one builds the scene tour (the scene-tour designer) and the other views it (the scene-tour shower). It mainly involved interface design, without any intricate theory. First, we gather material: the scene's map and several panoramas taken in the scene. Second, we load the map into the scene-tour designer, link each map site to its corresponding panorama and adjust the directions. Last, we save a file ending in ".tour", a structure we defined. Now we can open this file in the scene-tour shower and tour the scene. To build a demo, I downloaded material from [http://www.jietu.com Jietu], a famous Chinese panoramic website.<br />
<br />
== Panoramic stitching ==<br />
Spherical panoramas that can be viewed through PTViewer are usually stitched from images captured with fisheye cameras, which is inconvenient, expensive and complex. Can we instead stitch overlapping images taken with a hand-held camera? Guided by this goal, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M. Brown’s “Recognising Panoramas”] was recommended by my tutor, who said that if I could build a system according to this article, we would make it. So I proceeded step by step.<br />
First, we extract control points from each image with SIFT. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof. Lowe's SIFT paper] carefully and understood how it works. Then I searched the web and found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift], a C# implementation of SIFT, and spent more than a month porting it to C++; since then I have revised the program repeatedly to make it faster and more readable. Second, the keypoints must be matched. M. Brown matched keypoints with Best-Bin-First search. I downloaded a kd-tree implementation from [http://www.cs.umd.edu/~mount/ANN/ ANN] and found it worked well and fast. To remove outliers (mismatched pairs), we modified RANSAC, and our experiments verified its efficiency. Third, we compute the parameters that relate the images to one another. M. Brown used bundle adjustment for this; it was hard for me to implement, and I found no available source code. After being stuck for a month, I changed my mind and built a comparatively simpler, partially automatic system: one input image must be a central image, that is, all the other images overlap it. The system recognizes this central image and stitches the others to it; the projection is an 8-parameter matrix computed with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. In fact, I never gave up on a fully automatic system, and thanks to [http://hugin.sourceforge.net/ Hugin] I made it at last. After downloading Hugin, VIGRA and wxWidgets, I compiled them successfully with VC.NET, debugged step by step and gradually came to understand how Hugin works. I then extracted the code I needed, but found the system monolithic, so I decided to build my system on top of it, using Hugin as the framework and combining the SIFT code with blending code from Enblend. I know Hugin is a GUI based on Panorama Tools, which is wonderful core code; I have never stopped analyzing its principles and never will.<br />
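To illustrate the RANSAC outlier-removal step, here is a deterministic sketch over a pure-translation motion model. The actual system fits an 8-parameter projection matrix refined with Levenberg-Marquardt; the `Match` struct, `ransacInliers` and the exhaustive hypothesis loop are my illustrative simplifications:

```cpp
#include <vector>
#include <cmath>
#include <cstddef>

struct Match { double x1, y1, x2, y2; };  // a putative control-point pair

// RANSAC over a pure-translation model: each match proposes a shift
// (dx, dy); the shift supported by the most matches wins, and matches
// that disagree with it by more than `tol` pixels are the outliers.
// Iterating over every match as a hypothesis keeps the sketch
// deterministic; real RANSAC samples hypotheses at random.
std::vector<bool> ransacInliers(const std::vector<Match>& ms, double tol) {
    std::size_t bestSupport = 0;
    double bx = 0, by = 0;
    for (const Match& h : ms) {                 // every match is a hypothesis
        double dx = h.x2 - h.x1, dy = h.y2 - h.y1;
        std::size_t support = 0;
        for (const Match& m : ms)
            if (std::hypot(m.x2 - m.x1 - dx, m.y2 - m.y1 - dy) <= tol)
                ++support;
        if (support > bestSupport) { bestSupport = support; bx = dx; by = dy; }
    }
    std::vector<bool> inlier(ms.size());
    for (std::size_t i = 0; i < ms.size(); ++i)
        inlier[i] = std::hypot(ms[i].x2 - ms[i].x1 - bx,
                               ms[i].y2 - ms[i].y1 - by) <= tol;
    return inlier;
}
```

With a homography instead of a translation, the hypothesis step samples four matches and the residual becomes the reprojection error, but the consensus logic is the same.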
<br />
== Google Summer of Code 2007 ==<br />
<br />
Later</div>Girlliyanlihttps://wiki.panotools.org/index.php?title=User:Girlliyanli&diff=8565User:Girlliyanli2007-03-24T02:02:10Z<p>Girlliyanli: /* Programming Experience */</p>
<hr />
<div>My name is Yanli Li. I'm a Chinese student doing research on panoramas.<br />
== Education Background ==<br />
* 1997.09-2000.07 High school student. No.1 Middle School of Kaifeng County, Henan province, China<br />
* 2000.09-2004.07 Undergraduate. Department of Computer Science, Shandong University, Jinan, Shandong province, China<br />
* 2004.09-2007.07 Postgraduate. Human-Computer Interface and Virtual Reality Lab, Department of Computer Science, Shandong University, Jinan, Shandong, China<br />
* 2007.09- Doctoral student. Virtual Reality Lab (a state key lab), Beijing University of Aeronautics and Astronautics, Beijing, China<br />
<br />
== Programming Experience ==<br />
My favorite programming language is C/C++. I have been programming in it since 2001 and have never stopped. Most of my projects involve image processing. My undergraduate project, the System of Text-Image Preprocessing, won Shandong University's Excellent Undergraduate Design award and involved a great deal of image-processing knowledge, so I know the field well.<br />
When I enrolled in the HCI&VR lab in 2004, I began to study panoramas. At first, I analyzed PTViewer's Java code and, after porting the source code from Java to C, built a panoramic viewing system that runs on a local machine. I also designed a scene-tour system based on the panoramic viewer. Later, I began to study panoramic stitching. After studying the source code of Hugin, SIFT, Panorama Tools and Enblend, I built a panoramic stitching system in VC.NET. It is a fully automatic system: the framework is Hugin, the control points come from SIFT, matching is based on a kd-tree, blending uses the multiresolution spline, and RANSAC was modified so that outliers can be removed more quickly.<br />
<br />
== Panoramic viewer ==<br />
I first came across the word "panorama" on the lab's discussion group, where my tutor asked whether I was interested in doing research on panoramas. At that time I knew nothing about it but was eager to learn. So my tutor gave me some material: [http://www.path.unimelb.edu.au/~bernardk/tutorials/360/index.html Big Ben’s panorama tutorial] and Panorama Tools. Later, I found that a panorama is an amazing kind of image, especially when viewed through PTViewer, and I thought it would be fun to learn more about it.<br />
Then my tutor told me that PTViewer was used to view panoramas and had source code, but that it was written in Java so the applet could easily be used on websites. He gave me another panoramic viewer, PFSview; it could run on a local machine but came without source code. So it seemed that if we wanted to view panoramas on a local machine, we had to port PTViewer from Java to C++. It took me about two months to analyze the source code's interfaces, write documents about its structure, build a framework, design several classes, write the detailed code and remove small bugs. My tutor advised me to use Direct3D to speed it up, but I found it worked well without it. After that period, I knew its principle well: the only input is a spherical panorama, and what we want is the image on the screen, which can be called the view image. A virtual sphere sits between the two images and relates them. As the three parameters of the virtual sphere change (the pan angle, the tilt angle and the hfov angle, controlled in VC.NET by mouse events, keyboard events and the menu or toolbar), mathematical formulas handle the pixel correspondence between the panoramic image and the view image. Detailed information can be found (here?).<br />
<br />
One panorama only covers the view from one site, which is not enough information if we want to tour a whole scene. Following my tutor's advice, I began to build a scene-tour system; it took about two months to finish the job. The system comprises two parts: one builds the scene tour (the scene-tour designer) and the other views it (the scene-tour shower). It mainly involved interface design, without any intricate theory. First, we gather material: the scene's map and several panoramas taken in the scene. Second, we load the map into the scene-tour designer, link each map site to its corresponding panorama and adjust the directions. Last, we save a file ending in ".tour", a structure we defined. Now we can open this file in the scene-tour shower and tour the scene. To build a demo, I downloaded material from [http://www.jietu.com Jietu], a famous Chinese panoramic website.<br />
<br />
==Panoramic stitching==<br />
It was natural for us to keep studying panoramas, and the next step was acquiring one. Spherical panoramas that can be viewed through ptviewer are usually stitched from images captured with fisheye cameras, which is inconvenient, expensive and complex. Can we instead stitch overlapping images taken with a hand-held camera? With this goal in mind, I consulted many articles. [http://www.cs.ubc.ca/~mbrown/autostitch/autostitch.html M.Brown's recognizing panoramas] was recommended by my tutor, who said that if I could build a system according to this article, we would make it. So I began, step by step. First, we should extract control points (SIFT features) from each image. I read [http://www.cs.ubc.ca/~lowe/keypoints/link Prof. Lowe's SIFT paper] carefully and understood clearly how it works, but it was still difficult for me to program it all by myself, so I searched the web and found [http://user.cs.tu-berlin.de/~nowozin/autopano-sift/ autopano-sift], which provides SIFT source code in C#. I was not afraid of porting it to C++; it took me more than a month. Later on, I modified the program from time to time, optimizing it and making it easier to read. Second, the keypoints should be matched. M.Brown matched keypoints with Best-Bin-First search. In a discussion group, a doctoral student gave a talk on his research, from which I learned that kd-trees work well for searching in high dimensions and that there was plenty of material on them. I downloaded source code from [http://www.cs.umd.edu/~mount/ANN/ ANN] and compiled it, finding that it indeed worked well and fast. As for removing outliers (wrongly matched pairs), I modified RANSAC to remove them, and our experiments verified its efficiency. Third, we should calculate the parameters that relate the images to one another. M.Brown used bundle adjustment to calculate them; it was hard for me to implement, and I did not find available source code.<br />
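The outlier-removal step can be illustrated with a plain RANSAC loop. This is a generic sketch, not the author's modified version; to keep it short, the motion model is a pure 2D translation, so one match is a minimal sample, whereas a real stitcher would fit a homography from four matches.

```cpp
#include <cmath>
#include <cstdlib>
#include <utility>
#include <vector>

struct Match { double x1, y1, x2, y2; };  // a matched keypoint pair

// Generic RANSAC: repeatedly hypothesize a model from a random minimal
// sample and keep the largest consensus (inlier) set found.
std::vector<int> ransacInliers(const std::vector<Match>& matches,
                               int iterations, double tol)
{
    std::vector<int> best;
    for (int it = 0; it < iterations; ++it) {
        // 1. Hypothesize a translation from one random match.
        const Match& s = matches[std::rand() % matches.size()];
        double dx = s.x2 - s.x1, dy = s.y2 - s.y1;

        // 2. Count how many matches agree with the hypothesis.
        std::vector<int> inliers;
        for (int i = 0; i < (int)matches.size(); ++i) {
            double ex = matches[i].x2 - (matches[i].x1 + dx);
            double ey = matches[i].y2 - (matches[i].y1 + dy);
            if (std::sqrt(ex * ex + ey * ey) < tol) inliers.push_back(i);
        }

        // 3. Keep the largest consensus set seen so far.
        if (inliers.size() > best.size()) best = std::move(inliers);
    }
    return best;  // matches outside this set are the outliers to discard
}
```

Matches not in the returned set are discarded before the stitching parameters are estimated, which is what keeps a few bad SIFT matches from ruining the fit.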
After a month at a standstill, I changed my mind and started to build a comparatively easier system. It is limited and only partially automatic: one of the input images must be a central image, that is, every other image overlaps with it. The system recognizes this central image and stitches the images together. The projection I used is an 8-parameter homography matrix, calculated with [http://www.ics.forth.gr/~lourakis/levmar/ LM]. In fact, I never gave up on building a fully automatic system, and thanks to [http://hugin.sourceforge.net/ Hugin] I made it at last. After downloading hugin, vigra and wxWindows, I compiled them successfully with VC.net, debugged them step by step and gradually realized how everything works. I then began to extract the code I needed, but found the system tightly coupled, so I decided to build my system on top of it: using hugin as the framework, combining the SIFT code with the blending code from Enblend, and removing unused functionality such as the interface and wxWindows. I did it within two months. I know hugin is based on panotools, which is a wonderful core library; it deserves to be read carefully, used fully and extended. I have never stopped analyzing its principles.<br />
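For context, the "8-parameter matrix" is a 3×3 planar homography with the bottom-right entry fixed to 1, leaving eight free parameters for LM to estimate. Applying it to a point is a matrix multiply followed by a perspective divide; a minimal sketch (my own naming, not the author's code):

```cpp
struct Point { double x, y; };

// Apply a 3x3 homography, stored row-major in h[9] with h[8] == 1 by
// convention (the 8 free entries are what LM would estimate).
Point applyHomography(const double h[9], Point p)
{
    double X = h[0] * p.x + h[1] * p.y + h[2];
    double Y = h[3] * p.x + h[4] * p.y + h[5];
    double W = h[6] * p.x + h[7] * p.y + h[8];
    return { X / W, Y / W };  // perspective divide
}
```

Warping each non-central image into the central image's frame with its estimated homography is what makes the one-central-image restriction sufficient for stitching.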
== Google Summer of Code 2007 ==<br />
<br />
Later</div>