3D technology

Technologies
There are several techniques to produce and display 3D moving pictures. The basic requirement is to display offset images that are filtered separately to the left and right eye. Two strategies have been used to accomplish this: have the viewer wear eyeglasses that filter the separate offset images to each eye, or have the light source split the images directionally into the viewer's eyes (no glasses required). Common 3D display technologies for presenting stereoscopic image pairs to the viewer include:[3]

    With lenses:
        Anaglyphic 3D (with passive red-cyan lenses; a compositing sketch follows this list)
        Polarization 3D (with passive polarized lenses)
        Alternate-frame sequencing (with active shutter lenses)
        Head-mounted display (with a separate display positioned in front of each eye, and lenses used primarily to relax eye focus)
    Without lenses: Autostereoscopic displays, sometimes referred to commercially as Auto 3D.
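
As a concrete illustration of the anaglyph approach, the sketch below composes a red-cyan image from a stereo pair: the red channel is taken from the left view and the green and blue channels from the right view, so passive red-cyan lenses route each view to the matching eye. This is a minimal sketch in Python; the file names are placeholders, and both images are assumed to have the same dimensions.

    import numpy as np
    from PIL import Image

    def make_anaglyph(left_path, right_path):
        # Load the stereo pair as RGB arrays of identical shape.
        left = np.asarray(Image.open(left_path).convert("RGB"))
        right = np.asarray(Image.open(right_path).convert("RGB"))
        # Red channel from the left view; green and blue from the right view.
        out = right.copy()
        out[..., 0] = left[..., 0]
        return Image.fromarray(out)

    # Usage (placeholder file names):
    # make_anaglyph("left.png", "right.png").save("anaglyph.png")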

Single-view displays project only one stereo pair at a time. Multi-view displays either use head tracking to change the view depending on the viewing angle, or simultaneously project multiple independent views of a scene for multiple viewers (automultiscopic); such multiple views can be created on the fly using the 2D plus depth format.
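
One way views can be generated on the fly from the 2D plus depth format is depth-image-based rendering: each pixel of the 2D image is shifted horizontally by a disparity proportional to its depth value. The following is a minimal sketch, not a production renderer; the disparity scale and the convention that larger depth values mean nearer pixels are assumptions, and the disocclusion holes it leaves would need to be filled in practice.

    import numpy as np

    def synthesize_view(image, depth, max_disparity=16):
        # image: (H, W, 3) uint8 array; depth: (H, W) floats in [0, 1],
        # where 1 is assumed to mean nearest to the viewer.
        h, w = depth.shape
        out = np.zeros_like(image)  # unwritten pixels remain black (holes)
        shift = np.round(depth * max_disparity).astype(int)
        for y in range(h):
            for x in range(w):  # forward-warp each pixel to its new column
                nx = x + shift[y, x]
                if 0 <= nx < w:
                    out[y, nx] = image[y, x]
        return out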

Various other display techniques have been described, such as holography, volumetric displays and the Pulfrich effect, which was used by Doctor Who for Dimensions in Time in 1993, by 3rd Rock from the Sun in 1997, and by the Discovery Channel's Shark Week in 2000, among others. Real-Time 3D TV, demonstrated in a YouTube video, is essentially a form of autostereoscopic display.

Stereoscopy is the most widely accepted method for capturing and delivering 3D video. It involves capturing stereo pairs in a two-view setup, with cameras mounted side by side and separated by the same distance as between a person's pupils.

If we imagine projecting an object point in a scene along the line of sight (for each eye in turn) to a flat background screen, we can describe the location of this point mathematically using simple algebra. In rectangular coordinates with the screen lying in the Y-Z plane (the Z axis upward and the Y axis to the right) and the viewer centered along the X axis, the screen coordinates are given by two terms, one accounting for perspective and the other for binocular shift. Perspective modifies the Z and Y coordinates of the object point by a factor of D/(D-x), while binocular shift contributes an additional term (to the Y coordinate only) of s*x/(2*(D-x)), where D is the distance from the chosen origin to the viewer (midway between the eyes), s is the eye separation (about 7 centimeters), and x is the true x coordinate of the object point. The binocular shift is positive for the left-eye view and negative for the right-eye view. For very distant object points, the eyes look along essentially the same line of sight; for very near objects, they may become excessively "cross-eyed". However, for scenes occupying the greater portion of the field of view, a realistic image is readily achieved by superposition of the left and right images (using the polarization method or the synchronized shutter-lens method), provided the viewer is not too near the screen and the left and right images are correctly positioned on the screen. Digital technology has largely eliminated the inaccurate superposition that was a common problem in the era of traditional stereoscopic films.[4][5]
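
Collected into formulas (notation as above, restated here for convenience), an object point at true coordinates (x, y, z) projects to screen coordinates

    \[
      z_s = \frac{D}{D - x}\, z,
      \qquad
      y_s = \frac{D}{D - x}\, y \;\pm\; \frac{s\,x}{2\,(D - x)},
    \]

with the plus sign for the left-eye view and the minus sign for the right-eye view.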

Multi-view capture uses arrays of many cameras to record a 3D scene as multiple independent video streams. Plenoptic cameras, which capture the light field of a scene, can also be used to capture multiple views with a single main lens.[6] Depending on the camera setup, the resulting views can either be displayed on multi-view displays or passed on for further image processing.

After capture, stereo or multi-view image data can be processed to extract 2D plus depth information for each view, effectively creating a device-independent representation of the original 3D scene. This data can be used to aid inter-view image compression or to generate stereoscopic pairs for multiple different view angles and screen sizes.
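
As an illustration of this extraction step, the sketch below estimates a per-pixel disparity map from a rectified stereo pair with OpenCV's block-matching algorithm; since disparity is inversely proportional to depth (depth = f*b/disparity for focal length f and baseline b), such a map is the core of a 2D plus depth representation. This is a minimal sketch: the parameter values and file names are placeholders, and the input is assumed to be a rectified grayscale pair.

    import cv2

    # Load a rectified stereo pair as grayscale (placeholder file names).
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching; numDisparities must be a positive multiple of 16.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right)  # fixed-point result, scaled by 16

    # Depth then follows as depth = f * b / (disparity / 16.0), where f is
    # the focal length in pixels and b is the camera baseline in scene units.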

2D plus depth processing can be used to recreate 3D scenes even from a single view and convert legacy film and video material to a 3D look, though a convincing effect is harder to achieve and the resulting image will likely look like a cardboard miniature.