We live in a three-dimensional world, but you wouldn't know it from the sensory information passing through our optic nerves. Because the retina captures only a two-dimensional projection of the light reaching it, we cannot perceive depth directly. Instead, our brains cobble together a rough approximation of depth using secondary cues such as shadows, the attenuation of light over distance, parallax, and relative movement. With all of this extra data processing, our brains give us a reasonably accurate picture of our three-dimensional world.
If you think our human skills in reproducing visual depth are lacking, though, try spending a day as a typical desktop or laptop computer. Despite superior processing power and a prodigious memory, a computer running a 3D imaging application must take its detailed mathematical knowledge of a 3D structure, flatten it, and then recreate that depth in a mere two dimensions on its limited two-dimensional screen, just as an artist does on canvas. The better rendering applications can expertly imitate those secondary cues that tell us how deep or how far a subject is. Even so, the resulting image still lacks that intuited third dimension we're accustomed to seeing with our own eyes.
Enter LightSpace Technologies Inc. and its DepthCube z1024 3D Display System. Founder and president Alan Sullivan caught on to the concept of a three-dimensional display back in 1996. When the company that started development — with a Small Business Innovation Research (SBIR) grant from the Defense Advanced Research Projects Agency (DARPA) — folded in 2002, Sullivan acquired the relevant technology and patents and founded LightSpace in 2003 with the sole purpose of bringing the DepthCube concept to fruition. Sullivan himself was on hand to display the DepthCube at Wired's recent NextFest show.
The DepthCube more accurately represents 3D space because the monitor itself consists of multiple tiered screens, 20 of them, lined up along the z-axis from front to back (see the Figure). It is a rear-projection volumetric display, driven by a high-speed digital light processing (DLP) projector that transmits 1000 "image slices" each second. The 20 tiered screens are electrically switchable liquid-crystal scattering shutters. At any given moment, only one is receptive while the other 19 are transparent, and an image slice is projected onto that active screen. Cycling through all 20 layers yields a whole-volume refresh rate of 50 Hz (1000 slices/s divided by 20 layers), more than fast enough to fool our eyes into seeing one complete image. The high-speed digital interface between the computer and the DepthCube lets the volumetric display produce an entirely fresh 3D image nearly 20 times each second. As Sullivan notes, this is a little too slow for virtual reality, but certainly fast enough for today's more practical applications.
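The timing arithmetic above can be sketched in a few lines of Python. This is a hypothetical model for illustration, not LightSpace's actual firmware: one shutter is cycled as the active layer at the projector's 1000-slice-per-second rate, which works out to the 50-Hz whole-volume refresh.

```python
# Hypothetical model of the DepthCube's layer cycling, for illustration only.
SLICES_PER_SECOND = 1000   # DLP projector slice rate
NUM_LAYERS = 20            # tiered liquid-crystal shutter screens

def active_layer(slice_index: int) -> int:
    """Return which shutter is receptive (scattering) for a given slice;
    the other 19 shutters remain transparent."""
    return slice_index % NUM_LAYERS

# Every NUM_LAYERS consecutive slices repaint the entire volume once.
volume_refresh_hz = SLICES_PER_SECOND / NUM_LAYERS  # 50.0 Hz
```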
Each image slice projected onto its targeted screen depicts only 5% of the depth of the whole image. A patented 3D anti-aliasing hardware algorithm smooths out any jarring discontinuities from one z-layer to the next. Sullivan demonstrated the effect by toggling anti-aliasing off and on repeatedly, showcasing the dramatic difference between a sharply stepped "contour map" and a subtly blended 3D image. The end result gives the viewer a smooth 3D image that recreates depth not only with forced perspective, which every 2D medium relies on, but with true parallax and actual distance, something unique to a physically deep display.
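The blending idea can be illustrated with a short sketch: split each point's intensity between the two nearest z-layers in proportion to its fractional depth, so a point falling between layers contributes partially to both. This is a generic linear-interpolation sketch of the concept, not LightSpace's patented algorithm.

```python
def antialias_depth(z: float, num_layers: int = 20):
    """Split a point's intensity between its two nearest layers.

    z is a normalized depth in [0.0, 1.0]. Returns (layer_index, weight)
    pairs whose weights sum to 1.0. Illustrative only; the DepthCube's
    patented hardware algorithm is not public.
    """
    pos = z * (num_layers - 1)   # position along the layer stack
    lower = int(pos)
    frac = pos - lower
    if frac == 0.0:
        return [(lower, 1.0)]    # point lies exactly on one layer
    # Otherwise blend between the two adjacent layers.
    return [(lower, 1.0 - frac), (lower + 1, frac)]
```

Turning this blending off is what produces the stepped "contour map" Sullivan demonstrated: every point snaps entirely to its nearest layer.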
The DepthCube's luminance and contrast aren't what we're used to (brightness is only 120 nits, and the contrast ratio is 360:1), but that's less of a problem than you might think. And the novelty of being able to view a genuinely 3D image on a computer screen more than makes up for it. Even better, you'll never have to worry about eye fatigue or uncomfortable gear, as you do with stereo glasses. The DepthCube's image can be viewed from any angle, unlike autostereoscopic LCDs, and it doesn't induce nausea or suffer from "jump-cut" syndrome.
So where does the DepthCube get the data necessary to construct a true 3D image? Simple: Most 3D applications already supply this z-buffer data, but it is discarded or reinterpreted into a 2D image, fed through your graphics card, and displayed on your 2D monitor. DepthCube's GLInterceptor software can capture any 3D images generated by applications written to utilize the OpenGL graphics application programming interface (API), such as SolidWorks, 3ds Max, or even games like Quake. Just install the software, fire up your preferred OpenGL 3D application, and you're good to go. LightSpace also provides an API, source code, and developer support with an eye toward writing your own DepthCube-savvy applications.
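As a rough illustration of what capturing that data involves (a simplified sketch, not GLInterceptor's actual method), the depth value behind each pixel can be quantized into one of the display's 20 layer indices:

```python
def zbuffer_to_slices(zbuffer, num_layers=20):
    """Quantize normalized depth values into layer indices.

    zbuffer holds floats in [0.0, 1.0] (0.0 = near plane, 1.0 = far plane).
    In a real capture, these values would come from the OpenGL depth buffer
    (e.g., read back with glReadPixels using GL_DEPTH_COMPONENT); here a
    plain list of floats stands in for it.
    """
    # Clamp the far-plane value 1.0 into the last layer rather than overflowing.
    return [min(int(z * num_layers), num_layers - 1) for z in zbuffer]
```

Each pixel's color would then be routed to the image slice for its layer index, which is the essence of turning an ordinary OpenGL scene into volumetric data.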
For now, the DepthCube z1024 is designed to work with any PC or supercomputer running Windows or Linux, with at least a 1.5-GHz Intel Pentium 4 processor and an NVIDIA GeForce 5 graphics card or better. The PCI-based high-speed parallel data interface is supplied with the DepthCube system, which can also accommodate a Gigabit-Ethernet connection. LightSpace Technologies is currently developing support for other operating systems and plans to offer additional advantages such as IP addressability.
"From a technological direction, for us, our goal can really be summed up in one phrase," Sullivan says. "We want to be the 3D display on the desktop." LightSpace is not interested in developing television or movie-viewing technology, display stands, or any of the countless possible derivatives. The company is solely concerned with desktop displays. As Sullivan is quick to add, "Primarily low-cost desktop displays." It's off to a good start, with a 20-in. screen it hopes to expand to 30 in., 24-bit color depth in the works and plans to increase the refresh rate to at least 72 Hz. But, Sullivan notes, "These are the early days for this type of technology. We can't leverage what people have done before." The DepthCube is being developed from the ground up.
The potential applications of the DepthCube are virtually limitless. Some immediate uses lie in CAD for engineers and architects, non-invasive 3D imaging for doctors, 3D microscopy for chemists and geneticists, and security screening at our ports. LightSpace has already been in touch with the Transportation Security Administration (TSA), Sullivan admits. "We will volunteer our hardware and even our software expertise to submit this technology for you to test," he told the TSA. The prospect of safe and comprehensive inspection of luggage and cargo is a powerful idea, but there's a lot of red tape to cut through before any such tests commence. That's okay with Sullivan. He's interested in developing the technology, not the direct applications. "We are going to partner up with people who already provide solutions in important markets, and we're going to add value to those solutions, in order for those companies to be more successful than they already are," Sullivan says. "Customers buy solutions, not technology." LightSpace will focus on providing its technology to other companies that can use it to produce new solutions for consumers. Sullivan can't provide any specifics yet, but keep an eye on the 3D display market. With LightSpace's DepthCube z1024, the horizon of digital displays is looking a whole lot broader, and deeper.
For more information, visit LightSpace Technologies Inc.