Interactive Panoramic Viewer
See SoC 2007 overview for usage hints and a page list.
Freepv is part of the PanoTools software universe and is an effort to create a universal viewer for the different VR file formats. These projects therefore intend to add new features such as basic support for the SPi-V file format and better support for QTVR files, since neither the Shockwave player nor QuickTime is available to Linux/Unix users, as well as features like hotspots that are essential for an interactive user experience. One of the main objectives is a stable OpenGL renderer.
Freepv is intended to be a cross-platform, stand-alone and plug-in panorama viewer able to support different types of files. Freepv especially tries to fill some gaps for Linux and Unix users, where there is little support for QTVR, SPi-V, and other panorama file formats.
Freepv is at an early stage of development, so there are many improvements to make: restructuring and cleaning up the code, adding comments, and of course optimizing parts like the OpenGL renderer. I will help with these issues by producing a more stable renderer to work with.
Then I will have to extend or restructure the Scene and Camera classes, which act as an interface between the QTVRdecoder and the renderer; these classes should be flexible enough to handle different panoramic formats. With these enhanced classes in place, supporting a new format will only require implementing the corresponding file decoder module. Each decoder fills the classes' internal data structures, and the renderer works with the file data through them.
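To illustrate the intended separation (a sketch only; the class and field names below are hypothetical, not Freepv's actual API), the decoder fills a Scene that the renderer then consumes:

```python
# Illustrative sketch of the decoder -> Scene/Camera -> renderer flow.
# All names here are hypothetical; Freepv's real C++ classes differ.
from dataclasses import dataclass, field

@dataclass
class Camera:
    yaw: float = 0.0     # viewing direction, degrees
    pitch: float = 0.0
    fov: float = 70.0    # field of view, degrees

@dataclass
class Scene:
    projection: str = "cubic"                  # "cubic", "cylindrical", "equirectangular"
    faces: dict = field(default_factory=dict)  # face name -> image data
    camera: Camera = field(default_factory=Camera)

class QTVRDecoder:
    """A decoder only fills the Scene; it knows nothing about rendering."""
    def decode(self, path):
        scene = Scene(projection="cubic")
        for face in ("front", "right", "back", "left", "top", "bottom"):
            scene.faces[face] = f"<pixels of {face} face from {path}>"
        return scene

def render(scene):
    """A renderer reads only the Scene; it knows nothing about file formats."""
    return f"rendering {scene.projection} panorama with {len(scene.faces)} faces"

scene = QTVRDecoder().decode("tour.mov")
print(render(scene))  # rendering cubic panorama with 6 faces
```

Adding an SPIVdecoder would then mean writing one more class with the same `decode` shape, leaving the renderer untouched.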
The next part of the project will add basic support for the SPi-V XML file format. This will consist of adding an XML parser and implementing the SPIVdecoder; for this purpose I will need to read the SPi-V XML file format specification in detail. By the end of this part, the viewer should be able to display basic SPi-V panoramas; essentially, it will handle data from the meta node.
Another task is to create a very basic freepv XML format as an alternative to reading the different cubic face JPEG images from the program's argument list; it would be easier to specify just an XML file, which could be either a Freepv file or a SPi-V file.
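A minimal sketch of what such a freepv XML file might look like, parsed with a standard XML library (the element and attribute names are invented for illustration; the real format is still to be designed):

```python
# Hypothetical freepv XML listing the six cubic faces, parsed with the
# Python standard library as a stand-in for the viewer's XML parser.
import xml.etree.ElementTree as ET

doc = """
<freepv>
  <cubic>
    <face name="front"  src="front.jpg"/>
    <face name="right"  src="right.jpg"/>
    <face name="back"   src="back.jpg"/>
    <face name="left"   src="left.jpg"/>
    <face name="top"    src="top.jpg"/>
    <face name="bottom" src="bottom.jpg"/>
  </cubic>
</freepv>
"""

root = ET.fromstring(doc)
# Map each face name to its image path, ready to hand to the decoder.
faces = {f.get("name"): f.get("src") for f in root.iter("face")}
print(faces["front"], len(faces))  # front.jpg 6
```

Pointing the viewer at one such file would replace six separate command-line arguments.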
The last part of the project will consist of enhancing the interactive user experience by adding support for hotspots; this will let the user take a virtual tour, switching from one scene to another simply by clicking on a hotspot. This feature should be supported for both the QTVR and SPi-V file formats, so I will have to plan and design a component to handle the different scenes of a tour. Furthermore, most hotspots in SPi-V are made with images that are not necessarily JPEG, so support for formats such as PNG and GIF will need to be added.
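As a sketch of such a tour component (all names here are hypothetical, not a committed design), each scene's hotspots could name a target scene, and a click switches the current scene:

```python
# Hypothetical tour component: each scene maps hotspot ids to target
# scenes; clicking a known hotspot switches the current scene.
class Tour:
    def __init__(self, start, links):
        self.current = start
        self.links = links  # scene name -> {hotspot id -> target scene}

    def click(self, hotspot_id):
        targets = self.links.get(self.current, {})
        if hotspot_id in targets:  # clicks outside any hotspot are ignored
            self.current = targets[hotspot_id]
        return self.current

tour = Tour("lobby", {"lobby": {"door": "hall"}, "hall": {"back": "lobby"}})
print(tour.click("door"))  # hall
print(tour.click("back"))  # lobby
```

The same component would serve both QTVR and SPi-V tours, since the decoders only need to fill in the link table.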
The hotspots can be points, squares, or polygons, so the design of the hotspot interface should accommodate all three configurations. I will need to review the EventListener class and implement a basic collision detection algorithm. Hotspots will support basic events such as roll-over and click. Taking advantage of the new image readers and the SPi-V file format, it would also be nice to display flat images within the panoramic views; these images could then behave as buttons or as external links to web pages.
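For the polygon case, one simple candidate for the collision test is the classic ray-casting point-in-polygon algorithm, sketched here in 2D panorama coordinates (a real implementation would also account for the projection in use):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: cast a horizontal ray to the right from (x, y)
    and count edge crossings; an odd count means the point is inside.
    polygon is a list of (x, y) vertices in order."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:       # crossing is to the right of the point
                inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon(5, 5, square))   # True
print(point_in_polygon(15, 5, square))  # False
```

Point hotspots reduce to a distance check and square hotspots to a bounds check, so the polygon case is the only one needing real work.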
Finally, Freepv is able to become an interesting and useful tool for the panorama tools community, and this project intends to be one step further toward achieving Freepv's main objectives:
1. Have a more stable OpenGL renderer.
2. Make the SPi-V file format available to Linux/Unix users.
3. Add hotspot support to Freepv.
4. Have cleaner, more understandable code.
5. Testing period.
April 11: Begin a deeper familiarization with the project code and set everything up (libraries, compilers, etc.) so that I have no problems when coding begins.
April 23: Read articles, literature, papers, and specifications; resolve any remaining doubts with the mentors and the community.
May 9: Read the file format specifications in more detail and discuss some implementation details with my mentors and the project community.
May 28: Begin cleaning up the code and restructuring the OpenGL renderer; I'll be working on the cylindrical, cubic, and equirectangular rendering engines.
June 11: Begin the testing period.
June 18: Begin coding the file decoder class using the XML parser, planning the new image readers (PNG, GIF) and the changes needed to the Scene and Camera classes.
June 29: By this date the OpenGL renderer must be stable, cross-platform, and well documented. Begin implementing the image readers and planning the hotspots.
July 9: Students upload code to code.google.com/hosting; mentors begin mid-term evaluations. By this date the viewer must be able to read basic SPi-V files, and I'll begin coding the hotspot and flat image loader.
August 13: Begin the testing period.
August 20: Students upload code to code.google.com/hosting; mentors begin final evaluations. By this date the hotspots and flat image loader should work.
August 31: The Final Day.