Intel’s latest RealSense depth cameras combine active-stereo 3D sensing—an infrared projector paired with stereo imagers—with an RGB camera. The $149 D415 (Fig. 1) and the $179 D435 (Fig. 2) provide a depth stream of up to 720p at up to 90 frames/s. On the RGB side, the level of detail increases to 1080p.
1. Intel’s D415 RealSense Depth Camera delivers 90 frames/s at 720p resolution with a rolling image shutter.
The two differ in shutter type and field of view. The D415 uses a rolling shutter, while the D435 employs a global shutter. The D435 also has a wider field of view (91.2 by 65.5 deg, 100.6 deg diagonal) than the D415 (69.4 by 42.5 deg, 77 deg diagonal). Consequently, the D435 suits applications where objects are closer to the camera, such as a small robot where a single fixed camera must cover the entire area in front of it. The D415, with its narrower view, provides more precise measurements of small objects at a distance. The D435’s global shutter also works better when fast-moving objects are in view.
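A quick bit of trigonometry makes the coverage difference concrete: the horizontal swath a camera sees at a given distance is 2·d·tan(HFOV/2). The sketch below (plain Python, no camera required) uses the horizontal FOV figures quoted above:

```python
import math

def coverage_width(distance_m: float, hfov_deg: float) -> float:
    """Horizontal scene width covered at a given distance by a camera
    with the given horizontal field of view."""
    return 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)

# Horizontal FOV values from the comparison above.
D415_HFOV = 69.4
D435_HFOV = 91.2

# At 0.5 m, the D435 sees roughly a 1-m-wide swath versus about 0.7 m
# for the D415, which is why it suits close-in, small-robot use.
print(coverage_width(0.5, D435_HFOV))
print(coverage_width(0.5, D415_HFOV))
```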
2. The D435 RealSense Depth Camera uses a global shutter and provides a wide field of view.
Both cameras have a range of about 10 meters. They’re designed to operate indoors and outdoors.
The cameras are built around a 28-nm vision processor that supports up to five MIPI Camera Serial Interface 2 (CSI-2) lanes, all contained within the camera housing. They’re controlled over a USB 3 interface via a Type-C connector on the side. Each camera has a 1/4-20 UNC thread-mounting point on the bottom and a pair of M3 thread-mounting points on the back. The kits come with a small tripod for desktop use.
A Hands-On Perspective
I was able to check out both cameras with the latest cross-platform, open-source Intel RealSense SDK 2.0. The toolkit comes with the RealSense Viewer (Fig. 3), which can be configured to display 2D and 3D video streams from the cameras. It’s an easy way to see the capabilities of each camera, as well as do some general testing to see which camera best suits your needs. The program can show information from both sensors (RGB and depth), individually or in combination.
RealSense Viewer can record and play back RealSense camera depth-stream data files. These recordings can then be consumed by other applications built on the SDK.
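As a sketch of how an application might consume such a recording through the SDK’s Python binding (pyrealsense2 must be installed; the .bag file name here is illustrative), playback looks much like live streaming. The import is guarded so the pure-Python helper works even without the library:

```python
# Sketch only: assumes the pyrealsense2 package and a Viewer-recorded .bag file.
try:
    import pyrealsense2 as rs
except ImportError:        # library absent; the helper below still works
    rs = None

def raw_to_meters(raw_depth: int, depth_scale: float = 0.001) -> float:
    """Convert a raw 16-bit depth value to meters. 0.001 (1-mm units) is
    the usual D400-series default; query the device for the real scale."""
    return raw_depth * depth_scale

def play_recording(bag_path: str) -> None:
    """Replay a RealSense Viewer recording as if it were a live camera."""
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_device_from_file(bag_path)   # stream from the .bag file
    pipeline.start(config)
    try:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        # Depth at the center pixel, in meters.
        print(depth.get_distance(depth.get_width() // 2,
                                 depth.get_height() // 2))
    finally:
        pipeline.stop()

if rs is not None and __name__ == "__main__":
    play_recording("capture.bag")   # path is illustrative
```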
3. The RealSense Viewer application provides displays of 2D and 3D images from the cameras.
The Depth Quality Tool, as its name implies, tests a camera’s depth quality; specifically, it checks the Z-accuracy of the system. It can also measure spatial noise, fill rate, and distance to target, so the readings can be compared against physical measurements.
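The metrics the tool reports can be approximated from a single depth frame of a flat target at a known distance. The sketch below is plain Python with hypothetical helper names; the tool’s actual definitions (e.g., plane fitting for noise) are more involved:

```python
import statistics

def depth_metrics(depth_m, ground_truth_m):
    """Rough analogs of the Depth Quality Tool's metrics for a flat target:
    fill rate, Z-accuracy (signed median error vs. the known distance), and
    spatial noise. `depth_m` is a flat list of per-pixel depths in meters,
    with 0 meaning "no data" for that pixel."""
    valid = [d for d in depth_m if d > 0]
    fill_rate = len(valid) / len(depth_m)       # fraction of valid pixels
    median_d = statistics.median(valid)
    z_accuracy = (median_d - ground_truth_m) / ground_truth_m
    spatial_noise = statistics.pstdev(valid)    # std dev as a stand-in
    return fill_rate, z_accuracy, spatial_noise

# Toy frame: mostly ~2.0-m readings against a target known to be at 2.0 m.
frame = [2.0, 2.01, 1.99, 0, 2.0, 2.02, 1.98, 2.0]
print(depth_metrics(frame, 2.0))
```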
Both tools work out of the box once a camera is plugged in. The SDK is another matter. It includes C/C++ and Python programming interfaces along with command-line debug applications. The SDK and tools can be downloaded from the RealSense project site on GitHub. The kit supports earlier D400 cameras as well as the SR300. There’s also support for the Point Cloud Library (PCL), Node.js, the Robot Operating System (ROS), National Instruments’ LabVIEW, and Microsoft .NET. Linux, Microsoft Windows, and Apple’s macOS are all in play.
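A minimal “hello world” with the Python binding looks like the sketch below (pyrealsense2 assumed installed and a camera attached; the pure-Python helper mirrors what the SDK’s deprojection does for the distortion-free pinhole case, with illustrative intrinsics):

```python
try:
    import pyrealsense2 as rs
except ImportError:        # library absent; the helper below still works
    rs = None

def deproject(u, v, depth_m, fx, fy, ppx, ppy):
    """Pinhole back-projection of pixel (u, v) at the given depth into a
    3D camera-space point, as the SDK's deprojection does in the
    distortion-free case. Intrinsics (fx, fy, ppx, ppy) come from the device."""
    x = (u - ppx) / fx * depth_m
    y = (v - ppy) / fy * depth_m
    return (x, y, depth_m)

def main():
    pipeline = rs.pipeline()     # default config enables the depth stream
    pipeline.start()
    try:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        print("center depth (m):",
              depth.get_distance(depth.get_width() // 2,
                                 depth.get_height() // 2))
    finally:
        pipeline.stop()

if rs is not None and __name__ == "__main__":
    main()
```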
Over half a dozen examples are included with the SDK. In addition, links will take you to projects that use RealSense, such as the Simple Autonomous Wheeled Robot (SAWR) project. SAWR is based on ROS.
The cameras are small and compact, allowing them to be incorporated into projects such as robots that map their surroundings using simultaneous localization and mapping (SLAM) for navigation. They can also be used to prototype other embedded applications.
Overall, I found getting started with the new cameras was much easier using the latest SDK. Getting them to work with ROS and LabVIEW was a breeze, since most of the heavy coding is done within the SDK or example code. The cameras work nicely indoors and out, although I haven’t tested them extensively outdoors given the amount of snow we’ve had lately.
Now I just need to order a right-angle USB Type-C cable—the one I have tends to stick out.