Lighttwist [1] is an application for projecting images onto planar and non-planar surfaces. It creates an immersive experience by compensating for irregularities and deformations of the projection screen. Lighttwist handles images, video and 3D content, and can also synchronize the projection with sound from several speakers.

Set up
  • a surface to project on;
  • one or more projectors;
  • a camera (which sees what the viewer will see);
  • a network to synchronize the camera and the projectors.


A great thing about Lighttwist is its scalability: setting up a network of multiple computers is not necessary. One solution for home use is a video splitter that splits the DVI output of a single computer to drive several projectors (a variety of mini projectors suitable for small rooms is available on the market). The main purpose of Lighttwist is to create an immersive experience, allowing artistic work to be showcased with a bigger impact on the viewer. Beyond that, various other uses have been proposed, such as in the surgery room [2] or in classrooms (e.g. planetariums and presentations, as proposed by Yuval Levy).

Description of the problem

  • Simplest case
The screen is flat, and the projector and camera are perfectly aligned. We only need to find the invertible map that transforms pixel coordinates from camera space to projector space. Only in this simple case can we use homographies, represented by invertible 3x3 matrices.
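This flat-screen case can be sketched numerically: the 3x3 homography is estimated from four camera-to-projector point correspondences with the standard DLT method. The coordinates below are made up for illustration and are not taken from a real calibration.

```python
# Minimal sketch of the flat-screen case: estimate the 3x3 homography H
# mapping camera pixel coordinates to projector pixel coordinates from
# four point correspondences (DLT). Sample points are illustrative only.
import numpy as np

def homography(src, dst):
    """Solve for H such that dst ~ H @ src (homogeneous coords), via SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null-space vector = homography entries
    return H / H[2, 2]

def apply_h(H, pt):
    """Map a 2D point through H and de-homogenize."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Four camera-space corners and their (hypothetical) projector positions:
cam = [(0, 0), (1000, 0), (1000, 1000), (0, 1000)]
proj = [(100, 50), (900, 80), (950, 700), (60, 720)]
H = homography(cam, proj)
for c, p in zip(cam, proj):
    assert np.allclose(apply_h(H, c), p)   # the four corners map exactly
```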
  • General case
The geometry of the screen is arbitrary, and multiple projectors are not necessarily aligned with the user. This case also requires a catadioptric or fisheye lens to capture what the observer sees. Since homographies can no longer establish the correspondence between camera space and projector space, other techniques must be used, such as structured light [3].
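Gray-code patterns are one standard structured-light technique; the sketch below is illustrative and not necessarily the pattern set Lighttwist uses. Each projector column is encoded by a sequence of black/white stripe patterns, and the camera decodes the stripe sequence seen at each pixel back into a column index, pixel by pixel, with no homography assumption.

```python
# Illustrative structured-light sketch using Gray codes (an assumed choice,
# not confirmed as Lighttwist's method). Pattern k is white at a projector
# column iff bit k of that column's Gray code is set.
def gray_encode(n):
    return n ^ (n >> 1)

def gray_decode(g):
    n = 0
    while g:          # fold the bits back down to plain binary
        n ^= g
        g >>= 1
    return n

width = 1024                      # hypothetical projector width
bits = (width - 1).bit_length()   # 10 stripe patterns suffice
col = 700                         # some projector column

# What the camera observes at the pixel lit by this column, one bit
# per projected pattern:
observed = [(gray_encode(col) >> k) & 1 for k in range(bits)]

# Decoding recovers which projector column lit that camera pixel:
g_seen = sum(b << k for k, b in enumerate(observed))
assert gray_decode(g_seen) == col
```

Gray codes are preferred over plain binary here because adjacent columns differ in only one bit, so a decoding error at a stripe boundary is off by at most one column.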


  • Resolution
There is a big difference between the resolution of the camera and that of the projector(s). The ratio is normally about 4:1 (the camera having approx. 1000x1000 pixels and the projectors 4000x768 pixels combined) and can go up to 8:1 when projecting on a hemisphere.
  • Size of the images
It is hard to manage images of 4096x768 pixels (in the case of 4 projectors), and even harder to manage video at that size.
  • Lack of content of appropriate quality
This is not the biggest challenge for still images, but no video camera supports the required resolution.

GSoC project

Right now Lighttwist can project a single row of images. To create a seamless picture, the areas illuminated by different projectors must overlap, but these overlap areas also look brighter. To compensate, the image from each projector is blended with those of its neighbors. One of the things that needs to be done is to extend this to several rows, so that Lighttwist can handle more complicated surfaces (e.g. corners). This would make it more usable in a home setting, where it is not always possible to build a cylinder or a hemisphere to project on (see Applications).

One of the simplest approaches is to create a mask for each projector so that overlap areas are illuminated by only one projector. But due to vibrations and inevitable numerical errors, this may result in noticeable gaps in intensity. A better approach is to precompute an alpha mask for each projector, with values between 0.0 and 1.0. The alpha values can be treated as weights assigned to each projector at a particular pixel. We therefore need a way to smoothly reduce the intensity of the projected image near its edges (where it overlaps with a neighboring projection). To obtain a uniformly illuminated image, we also introduce the constraint that at any pixel all alphas must sum to 1.0.

We can apply a blending technique commonly used in mosaics. For every pixel in the image:

  • each projector that illuminates this pixel gets a weight of 1.0, and 0.0 otherwise;
  • for each projector, alpha = weight * factor, where factor is the distance from the pixel to the edge of that projector's projected plane (all in camera coordinates);
  • normalize the alphas so they sum to 1.0.
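The steps above can be sketched as follows, assuming each projector's footprint is given as a boolean mask in camera coordinates. A brute-force Euclidean distance transform keeps the example dependency-free; a real implementation would use an optimized one.

```python
# Sketch of distance-based alpha blending, assuming per-projector boolean
# footprint masks in camera coordinates (a simplifying assumption).
import numpy as np

def dist_to_edge(mask):
    """For every covered pixel, distance to the nearest uncovered pixel."""
    outside = np.argwhere(~mask)          # (N, 2) uncovered pixel coords
    d = np.zeros(mask.shape)
    for y, x in np.argwhere(mask):
        d[y, x] = np.sqrt(((outside - (y, x)) ** 2).sum(axis=1)).min()
    return d

def blend_masks(masks):
    """Alpha maps proportional to edge distance, normalized to sum to 1."""
    dists = [dist_to_edge(m) for m in masks]
    total = np.sum(dists, axis=0)
    total[total == 0] = 1.0               # avoid division by zero off-screen
    return [d / total for d in dists]

# Two overlapping horizontal footprints on a tiny 4x8 "camera image":
m1 = np.zeros((4, 8), bool); m1[:, :5] = True
m2 = np.zeros((4, 8), bool); m2[:, 3:] = True
a1, a2 = blend_masks([m1, m2])
# In the overlap the alphas fade linearly and sum to 1 everywhere covered:
assert np.allclose((a1 + a2)[m1 | m2], 1.0)
```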

The alpha of a given pixel thus depends on its distance to the closest pixel invisible to that projector. The function used to compute the blending does not have to be linear; this is just the simplest solution. It should work in the case where all projectors are identical; when they are not, we also need to handle gamma correction, either before or during blending.
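The gamma issue can be illustrated for a single pixel: the alphas should be applied in linear light, not to the gamma-encoded values. The sketch assumes a simple power-law gamma of 2.2, which is an assumed value; real projectors should be measured or calibrated.

```python
# Hedged sketch of gamma-aware blending for one pixel, assuming a
# power-law display gamma of 2.2 (an assumption, not a measured value).
GAMMA = 2.2

def blend_gamma_correct(values, alphas):
    """Blend per-projector gamma-encoded values in [0, 1] for one pixel."""
    linear = sum(a * v ** GAMMA for v, a in zip(values, alphas))
    return linear ** (1.0 / GAMMA)

# Cross-fading full white into black at 50/50: in linear light the correct
# gamma-encoded output is about 0.73, not the naive gamma-space average 0.5.
print(round(blend_gamma_correct([1.0, 0.0], [0.5, 0.5]), 2))  # → 0.73
```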

I plan to devote 40+ hours per week to GSoC. I will be able to start working on the project as soon as my exams finish, on the 6th of May. A more detailed plan of attack is described in my GSoC application [4].

NOTE: Most of the content of this page, unless otherwise noted, comes from the email discussions with Yuval Levy and Sébastien Roy.

Lighttwist official site

Video of the lecture on Lighttwist

More links to videos on Lighttwist