Electronic Design
Why put a single camera on a servo when a half dozen with a compute engine provide a better result?

Delivering 3D Video for Virtual and Augmented Reality Applications

Download this article in .PDF format
This file type includes high resolution graphics and schematics when applicable.

Remember when digital cameras were big, bulky, and expensive? Getting a single camera to provide wider visual coverage often required multiple servos, and getting one to deliver a stable image while flying around attached to a plane or quadcopter was an even greater challenge. That is exactly the problem Viooa’s solution is designed to address.

The Viooa Solo (Fig. 1a) uses three image sensors to deliver a 360-deg. by 180-deg. image—an 8.5-Mpixel snapshot or 4K Ultra HD video. It uses on-board compute power to stitch the camera images together into a single panoramic image. The module also includes GPS support, plus gyroscopes and accelerometers to support digital image stabilization.
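To give a feel for how gyro data feeds digital stabilization, here is a minimal, hypothetical sketch (not Viooa's actual firmware): the gyro's angular rate is integrated into a rotation angle, which is converted into the pixel shift needed to cancel the motion. The sample interval and focal length are assumed values for illustration.

```python
import math

def stabilization_shift_px(gyro_rates_rad_s, dt_s=0.01, focal_px=1200.0):
    """Pixel shift that cancels the rotation accumulated over the samples.

    Integrates angular rate (rad/s) over fixed time steps to get an angle,
    then maps that angle to an image-plane offset via the pinhole model.
    """
    angle = sum(rate * dt_s for rate in gyro_rates_rad_s)  # rate -> angle
    return -focal_px * math.tan(angle)                     # angle -> pixels

# Five gyro samples at 0.05 rad/s accumulate a 0.0025-rad rotation,
# which a 1200-px focal length maps to roughly a 3-px correction.
shift = stabilization_shift_px([0.05] * 5)
```

Real stabilizers work with full 3D rotations and rolling-shutter timing, but the rate-integration step is the core idea.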

1. Viooa Solo (a) has three color image sensors, while the Super Resolution (b) has a single fish-eye color camera surrounded by four monochrome sensors for image stabilization.


The Viooa Super Resolution (Fig. 1b) camera pairs a single color camera with a fish-eye lens, delivering 8K images, with four monochrome cameras that assist in image stabilization. Both cameras are designed to mate with a number of UAVs, from the fixed-wing Quest to DJI quadcopters like the Phantom and Inspire 1.

Viooa’s solution is just one of many multiple-imager products. For example, the Vuze VR uses four pairs of image sensors to provide 360-deg. video for 3D virtual-reality videos. Each pair handles an overlapping quadrant and the pairs provide 3D imaging within the quadrant.

StereoLabs' ZED 2K Stereo Camera (Fig. 2) is designed for a single quadrant, but its 4-Mpixel sensors can deliver 2.2K (4416 × 1242 pixels) video at 15 frames/s, 1080p at 30 frames/s, or 720p at 60 frames/s. The ZED SDK is a C++ library that can generate 3D point clouds from the camera's output. It works with Nvidia's Jetson TX1 running Nvidia's CUDA software; the camera connects to the Jetson TX1 over a USB 3.0 cable. The combination provides depth sensing from 1 m to 15 m. That information can then be used for object recognition and, eventually, obstacle avoidance and path planning.
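The depth range a stereo camera like the ZED reports comes from triangulation: a point's horizontal disparity between the two sensors maps to metric depth as Z = f·B/d. The sketch below illustrates that relationship; the focal length and 120-mm baseline are assumed, illustrative values, not StereoLabs specifications.

```python
# Illustrative stereo triangulation, not ZED SDK code.
def depth_from_disparity(disparity_px, focal_px=1400.0, baseline_m=0.12):
    """Depth in meters from stereo disparity via Z = f * B / d.

    focal_px   -- focal length in pixels (assumed value)
    baseline_m -- distance between the two sensors (assumed value)
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Large disparities mean nearby points; small disparities, distant ones.
near = depth_from_disparity(168.0)  # roughly 1 m with these parameters
far = depth_from_disparity(11.2)    # roughly 15 m with these parameters
```

The inverse relationship explains why depth precision degrades with distance: at long range, a one-pixel disparity error spans many meters of depth.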

2. StereoLabs’ ZED 2K Stereo Camera provides high-definition 3D depth sensing when coupled with a compute engine like Nvidia’s Jetson TX1.


Some systems do post-processing of the video where a heftier compute server is available to stitch together video and GPS information. Others, like the ZED and Jetson TX1, work in real time. Having heavy-duty compute power available for video processing means there may be headroom to handle other image-processing chores, such as obstacle and face recognition.

Multiple cameras have the advantage of covering a large area as well as providing 3D information. That may not eliminate the need for other sensors, such as LIDAR, in some applications, but image sensors may be the only ones needed in many others.

