Historical:Tlalli

From PanoTools.org Wiki
Revision as of 19:06, 15 December 2007 by Erik Krause (talk | contribs) (glossaryfied)

Tlalli is a recent fork of Panotools that focuses on the remapping part of panorama generation, as opposed to registration.

The homepage of this project is http://www.tlalli.org

I believe that Panotools should be divided (at least from an architectural point of view) into two main parts: optimization and projection (there is also a third module, containing the projection computations that both parts need to use).

The separation of projection from optimization is probably best exemplified by Flexify, which takes an equirectangular (I haven't used it yet, so this is my understanding) and produces an output image after applying a transformation. Flexify does not need to know anything about registration and optimization.

What I am proposing here is a "programmable Flexify", or a super math-map. This is the direction in which I want to take Tlalli.


Let me elaborate:

The current transformation model, for an equirectangular as input, is a computation for each pixel of the output image. That is, given an image I and a list of optional parameters [p], compute its projection I':

I' = f(I, [p])
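In practice, f is usually evaluated by inverse mapping: for each pixel of the output image, compute the source coordinate it came from and sample the input there. A minimal sketch of that model, with purely illustrative names (none of these are actual Panotools or Tlalli APIs):

```python
# Sketch of the per-pixel model I' = f(I, [p]): for every output
# pixel (x, y), an inverse mapping tells us which source coordinate
# to sample. Nearest-neighbour sampling keeps the example short.

def remap(src, inverse_f, params, out_w, out_h):
    """Build the output image by pulling each pixel from src."""
    h, w = len(src), len(src[0])
    out = [[None] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            sx, sy = inverse_f(x, y, params)  # where does (x, y) come from?
            if 0 <= sx < w and 0 <= sy < h:   # drop out-of-bounds samples
                out[y][x] = src[int(sy)][int(sx)]
    return out

# Identity "projection": output pixel (x, y) comes from source (x, y).
identity = lambda x, y, p: (x, y)

img = [[1, 2], [3, 4]]
print(remap(img, identity, {}, 2, 2))  # → [[1, 2], [3, 4]]
```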

What if we have two such functions and compose them?

For instance, the output of one projection is used for another projection:

I'' = g(f(I, [p]), [p'])

This only makes sense if the output of f is compatible with the input of g. For example, compute the Cassini of an equirectangular (by rolling it 90 degrees), then compute the Mercator of the Cassini; the result is a transverse Mercator. We have implemented the transverse Mercator exactly this way, as the composition of rolling the equirectangular and applying the Mercator. Think of the possibilities.
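Under the inverse-mapping model above, composition needs no intermediate image at all: to pull pixels for the final image I'' = g(f(I)), apply the inverse mappings in reverse order, since (g ∘ f)⁻¹ = f⁻¹ ∘ g⁻¹. A sketch with toy coordinate transforms standing in for real projections (the names are hypothetical):

```python
# Composing projections at the coordinate level: the composite
# inverse mapping undoes g first, then f, so a single remapping
# pass can compute g(f(I)) with no intermediate image.

def compose_inverse(f_inv, g_inv):
    """Inverse mapping of the composite image transform g ∘ f."""
    def composite(x, y, params):
        x1, y1 = g_inv(x, y, params)   # undo g first...
        return f_inv(x1, y1, params)   # ...then undo f
    return composite

# Toy stand-ins: a quarter-turn "roll", then a horizontal shift.
roll_inv  = lambda x, y, p: (y, -x)
shift_inv = lambda x, y, p: (x - 1, y)

composite_inv = compose_inverse(roll_inv, shift_inv)
print(composite_inv(3, 2, {}))  # → (2, -2)
```

The real payoff is that the composite is itself just another inverse mapping, so it can be fed straight back into the same remapping engine.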

In the current computational model this is done in steps: generate I', then apply g to I'. This model has two disadvantages:

  • Resampling error accumulates with every intermediate image.
  • I/O operations are proportional to the number of functions.

What I envision is a system that will allow me to add my own functions to the computational stack, in the same way that layers work in Photoshop. So the composition happens at the pixel level, not the image level.
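The stack idea can be sketched in a few lines: each entry is a function evaluated at every output pixel, in order, each receiving the value produced by the one below it. All names here are illustrative, not an existing Tlalli interface:

```python
# A "computational stack" evaluated per output pixel, like layers
# in Photoshop: every function in the stack sees the coordinate and
# the running pixel value, and returns a new value.

def evaluate_stack(stack, x, y, src_value):
    """Run every function in the stack over one output pixel."""
    value = src_value
    for fn in stack:
        value = fn(x, y, value)
    return value

# Two toy pixel functions operating on 8-bit grey values.
brighten = lambda x, y, v: min(v + 10, 255)
invert   = lambda x, y, v: 255 - v

stack = [brighten, invert]
print(evaluate_stack(stack, 0, 0, 100))  # → 145
```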

For example, if you would like to have your logo at the nadir, you can create a function "Logo" and insert it into the computational stack. This function can take a string as a parameter. Then, when the panorama is projected/computed, the logo is inserted right on the spot.

If the architecture is open enough, this can lead to "plugin functions" that do things we can't even imagine today. We will only require the developer to create a function with certain properties, and register it.
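One common way to make such an architecture open is a function registry: a plugin author writes a pixel function with the expected signature and registers it under a name, and the engine looks it up at projection time. A hypothetical sketch (the registry, decorator, and the "logo" function are all invented for illustration):

```python
# Sketch of the "plugin function" idea: developers register a pixel
# function with certain properties, and the engine finds it by name.

REGISTRY = {}

def register(name):
    """Decorator that adds a pixel function to the global registry."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

@register("logo")
def logo(x, y, value, text="(c) me"):
    # A real version would blend a logo image near the nadir; here
    # we just replace the value at one coordinate to show the hook.
    return text if (x, y) == (0, 0) else value

print(REGISTRY["logo"](0, 0, 42))  # prints "(c) me"
```

With this in place, inserting a plugin into the computational stack is just `stack.append(REGISTRY["logo"])`.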

This system will be very powerful and useful beyond panoramas.