User:Albiorix

From PanoTools.org Wiki


==Lighttwist==
Lighttwist [http://vision3d.iro.umontreal.ca/category/projet/lighttwist/] is an application for projecting images onto planar and non-planar surfaces. It creates an immersive experience by compensating for irregularities and deformations of the projection screen. Lighttwist handles images, video, or 3D content and can also synchronize the images with sound from several speakers.
  
LightTwist projects images from multiple projectors onto surfaces with complex geometry. It is currently under development at the University of Montreal, in the lab of Professor Sébastien Roy. The key idea is to fill the viewer's field of view with images, thus creating an immersive experience; this is achieved by projecting onto surfaces that wrap around the viewer (e.g. the walls of a room).
==Set up==
 
 
* surface to be projected on;

* projector(s);

* camera (sees what the viewer will see);

* network for synchronization between the camera and projectors.
  
==Applications==
A great thing about Lighttwist is its scalability: setting up a network with multiple computers is not necessary. One solution for home use is a video splitter that splits the DVI output of a single computer to drive several projectors (there is a variety of mini projectors on the market suitable for use in small rooms).

Again, the main purpose of Lighttwist is to create an immersive experience, which allows artistic work to be showcased with a bigger impact on the viewer. Besides that, various other uses have been proposed for Lighttwist, such as in the operating room [http://www.lighttwist.org/twiki/pub/V3D/PublicationsScientifiques/tardifj_projector_EMBC03.pdf] or in classrooms (e.g. planetariums and presentations, as proposed by Yuval Levy).

==Description of the problem==
* Simplest case

: The screen to be projected on is flat, and the projector and camera are perfectly aligned. We only need to find the invertible map that transforms pixel coordinates from camera space to projector space. Only in this simple case can we use homographies, represented by 3x3 invertible matrices.

* General case

: The geometry of the screen is arbitrary, and the multiple projectors are not necessarily aligned with the user. This case also requires a catadioptric or fisheye lens to capture what the observer sees. Since homographies can no longer establish the correspondence between camera space and projector space, other techniques are used, such as structured light [http://mur08.iro.umontreal.ca/wp/wp-content/uploads/2008/09/tardifj_multiprojectors_3dim03.pdf].
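To make the simplest case concrete, here is a minimal pure-Python sketch of how a 3x3 homography maps camera pixels to projector pixels. The matrix <code>H</code> below is made up for illustration; in practice it would be estimated from point correspondences between the camera and projector images.

```python
def apply_homography(H, x, y):
    """Map a camera-space pixel (x, y) to projector space through a
    3x3 homography H, given as nested lists [[h00, h01, h02], ...]."""
    # Lift (x, y) to homogeneous coordinates (x, y, 1) and multiply by H.
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    # The perspective divide brings the result back to 2D pixel coordinates.
    return xh / w, yh / w

# A made-up homography: scale camera pixels by 2 and shift by (10, 5).
H = [[2.0, 0.0, 10.0],
     [0.0, 2.0,  5.0],
     [0.0, 0.0,  1.0]]

print(apply_homography(H, 100, 50))  # (210.0, 105.0)
```

Because the matrix is invertible, applying the same function with the inverse of <code>H</code> maps projector pixels back to camera pixels, which is what makes the camera-projector correspondence usable in both directions.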

==Challenges==
* Resolution

:: There is a big difference between the resolutions of the camera and the projector(s). The ratio is normally about 4:1 (the camera having approx. 1000x1000 pixels and the projectors a combined 4000x768 pixels) and can go up to 8:1 when projecting on a hemisphere. This complicates the mapping algorithm.

* Size of the images

:: It is hard to manage images of size 4096x768 (in the case of 4 projectors). It is even harder to manage video.

* Lack of content of appropriate quality

:: This is not the biggest challenge when it comes to still images, but there is no video camera yet that can capture the required resolution.

==GSoC project==
Right now Lighttwist is able to project a single row of images. To create a seamless picture, the areas illuminated by different projectors must overlap, but these overlap areas also look brighter. To compensate, the image from each projector is blended with those of its neighbors. One of the things that needs to be done is to extend this to several rows of images, so that Lighttwist can handle more complicated surfaces (e.g. corners). This would make it more usable in a home setting, where it is not always possible to set up a cylinder or a hemisphere to project on (see Applications).
One of the simplest approaches is to create a mask for each projector so that each area of overlap is illuminated by only one projector. But due to vibrations and inevitable numerical errors, this can result in noticeable gaps in intensity. A better approach is to precompute an alpha mask for each projector, with values between 0.0 and 1.0. The alpha values can be treated as weights assigned to each projector at a particular pixel, so we need a way to smoothly reduce the intensity of the projected image towards the edges, where it overlaps with another projector's image. To get a uniformly illuminated image, we also introduce the constraint that at any pixel all the alphas must sum to 1.0.
We can apply a blending technique commonly used for mosaics:

: for every pixel in the image:

:: each projector that illuminates this pixel gets a weight of 1.0 (and 0.0 otherwise);

:: for each projector, alpha = weight * factor, where factor is the distance from the pixel to the edge of that projector's projected plane (all in camera coordinates); finally, normalize the alphas.

The alpha for a given pixel thus depends on the distance to the closest pixel that is invisible to that projector. The function used to compute the blending does not have to be linear; this is just the simplest solution. This would most probably work when all the projectors are the same; when they are not, we also need to take care of gamma correction, either before or while doing the blending.
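The recipe above might be sketched as follows for a single camera-space scanline. The coverage ranges are hypothetical, and a real implementation would work on full 2D masks, but the weighting and normalization steps are the same.

```python
def blend_alphas(coverage, width):
    """Per-projector alpha masks along one camera-space scanline.

    coverage: list of (start, end) half-open pixel ranges, one per projector.
    Returns one alpha list per projector, normalized so that the alphas of
    all projectors covering a pixel sum to 1.0.
    """
    # Factor = distance from the pixel to the nearest edge of this
    # projector's covered range (a simple linear ramp), 0 outside it.
    factors = []
    for start, end in coverage:
        factors.append([min(x - start, end - 1 - x) + 1
                        if start <= x < end else 0.0
                        for x in range(width)])

    # Normalize per pixel so that the alphas sum to 1.0 wherever at least
    # one projector covers the pixel.
    alphas = [[0.0] * width for _ in coverage]
    for x in range(width):
        total = sum(f[x] for f in factors)
        for p, f in enumerate(factors):
            if total > 0:
                alphas[p][x] = f[x] / total
    return alphas

# Two hypothetical projectors overlapping on pixels 4 and 5 of a
# 10-pixel scanline.
a = blend_alphas([(0, 6), (4, 10)], 10)
print([round(v, 2) for v in a[0][3:7]])  # [1.0, 0.67, 0.33, 0.0]
```

Note how the first projector's alpha ramps down across the overlap while the second one's ramps up, which is exactly the smooth intensity falloff described above; a non-linear falloff could be substituted by changing how the factor is computed.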
  
I plan to devote 40+ hours per week to GSoC. I will be able to start working on the project as soon as my exams are finished, on the 6th of May. A more detailed plan of attack is described in my GSoC application [http://socghop.appspot.com/student_proposal/show/google/gsoc2009/albiorix/t123878501180].


==Structured light==

The idea is to project patterns of black and white stripes (horizontal and vertical, to capture the x and y coordinates) that encode the projector points. Stripes of width 2^(b-1) pixels (b = 1, ..., n) are used to encode n bits, and the binary coordinates are read off bitwise. In practice bits 1 and 2 are useless, since those stripes become too narrow for the camera to distinguish.
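Assuming the plain binary encoding described above (Gray codes are a common refinement that reduces decoding errors at stripe boundaries), the stripe patterns and the bitwise decoding might be sketched like this:

```python
def stripe_patterns(n_bits, width):
    """Binary stripe patterns for one axis of a structured-light scan.

    Pattern b uses stripes 2**(n_bits - 1 - b) pixels wide (1 = white,
    0 = black); projecting all n_bits patterns in sequence gives every
    projector column a unique bit sequence.
    """
    return [[(x // 2 ** (n_bits - 1 - b)) % 2 for x in range(width)]
            for b in range(n_bits)]

def decode_column(bits):
    """Recover a projector column index from the bits a camera pixel saw."""
    x = 0
    for bit in bits:  # most significant bit (widest stripe) first
        x = (x << 1) | bit
    return x

# Three patterns suffice to label 8 projector columns.
patterns = stripe_patterns(3, 8)
bits_at_5 = [p[5] for p in patterns]  # what a camera pixel facing column 5 sees
print(decode_column(bits_at_5))  # 5
```

The same scheme is repeated with horizontal stripes to recover the row, giving the full camera-to-projector correspondence without knowing the positions of the camera or the projectors.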
 
  
==Advantages==
+
NOTE: ''Most of the content of this page, unless otherwise noted, comes from the email discussions with Yuval Levy and Sébastien Roy.''
* using structured light we don't need to know the locations of the camera and projector, also there is no need to calibrate camera or projectors.
 
* projection surface: it can be absolutely arbitrary. Projection is even possible on surfaces with gaps;
 
* high quality of the projected images/videos.
 
* immersion experience.
 
  
Beyond the resolution and content issues listed above, there are practical challenges in setting up an installation:

* space. There are two possible ways of placing the projectors: inside or outside relative to the surface and the viewers. In either case the projectors have to be several meters away from the surface to avoid occluding the view.

* setting up the network. One computer is needed for each projector and one for the camera, plus an extra computer for synchronization (LightTwist uses Pure Data to generate the messages that control the computers in the network).

* limited viewing spots. The image looks undistorted only from certain positions: a flat surface gives the widest range, while more complicated surfaces generally put limitations on the location of the viewer.

[http://vision3d.iro.umontreal.ca/category/projet/lighttwist/ Lighttwist official site]
 
  
==Future==

[http://tot.sat.qc.ca/video/oct04/lighttwist.html Video of a lecture on Lighttwist]

As projectors become more widely available, there will be more interest in creating interactive environments, and LightTwist is one of the tools that can make this easy for the user.
 
There are many possible applications for LightTwist:

* it has great artistic potential, since it can greatly enhance the impression made by an artwork;

* use in medicine (e.g. projecting information obtained from an MRI scanner directly onto the body of the patient);

* use in education (e.g. creating a planetarium in a classroom, or again for medical studies).
 
  
==LightTwist and Hugin==

[http://tot.sat.qc.ca/logiciels_lighttwist.html More links to videos on Lighttwist]

Both applications will gain from the cooperation: LightTwist is the tool that can best showcase the features of Hugin, while Hugin is a free tool capable of creating high-quality content for LightTwist.

It will also be beneficial for LightTwist to be introduced to an open-source community such as Hugin's. Besides being able to participate in the development process, these people are also potential users of the application.
 

''Latest revision as of 21:35, 17 April 2009''
