Rambus is developing a tiny lensless imaging system that covers a typical photodiode array with a specially designed diffraction grating. The diffraction approach eliminates the lens and support structure used with a refraction-based camera, but the raw data requires significantly more preprocessing to obtain an image (Fig. 1). Luckily, this can be done using a conventional DSP or GPU.
The diffraction-based imaging system trades mechanical complexity and cost for computational complexity. The main limitation at this point is resolution: the current sensor provides a 128- by 128-pixel image with a 100-degree field of view. The resolution is unlikely to grow significantly, and telephoto operation is not possible because there is no lens involved.
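For context, a quick back-of-envelope calculation from the stated specs gives the angular span covered by a single pixel:

```python
# Back-of-envelope: angular span per pixel for the stated specs.
fov_deg = 100                      # field of view from the article
pixels = 128                       # pixels across the sensor

per_pixel_deg = fov_deg / pixels
print(round(per_pixel_deg, 2))     # ~0.78 degrees per pixel
print(round(per_pixel_deg * 60))   # ~47 arc minutes per pixel
```

That coarse per-pixel span is why the technology targets sensing tasks rather than photography.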
Part of this limitation stems from the diffraction approach itself and from the grating, which must work across the full range of visible light. Still, many insects and other animals function with this level of resolution, and many proximity-sensing applications can operate without high-resolution images.
Essentially, the system uses the diffraction grating to spread incoming light across the sensor array, so light from a single source hits many pixels. The software then extracts the image by inverting the complex but known mapping imposed by the grating. This is where the grating's spiral architecture comes into play: because the reconstruction algorithms know its structure, they can decompose the captured data back into the original image.
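As a rough sketch of the kind of preprocessing involved, image recovery can be posed as a linear inverse problem: the grating's known response spreads each scene point across many sensor pixels, and reconstruction inverts that mapping. The mixing matrix below is a random stand-in for the actual spiral grating's response, and the regularized least-squares solve is one standard choice, not necessarily the method Rambus uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the grating's behavior: each scene pixel
# contributes to many sensor pixels via a known mixing matrix A.
# (The real grating's spiral structure determines the actual A.)
n = 16 * 16                      # toy 16x16 scene (real sensor: 128x128)
A = rng.standard_normal((n, n)) / np.sqrt(n)

scene = np.zeros(n)
scene[40] = 1.0                  # a single point source in the scene
capture = A @ scene              # its light spread across the whole array

# Recover the scene with Tikhonov-regularized least squares:
#   x = argmin ||A x - y||^2 + lam * ||x||^2
lam = 1e-3
x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ capture)

print(int(np.argmax(x)))         # brightest recovered pixel: index 40
```

Note that even this one-shot linear solve is far more work than simply reading out a lens-focused image, which is the computational trade-off the article describes.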
Compare this to a refraction approach, where the lens focuses light from a source onto adjacent pixel sensors. In that case, obtaining the image is a simple matter of reading out the sensor data. The advantages are a simple sensor implementation and minimal data-handling overhead, versus the complex calculations required by the diffraction approach. The lens-based system scales well, but it requires more expensive glass optics to deliver high-quality images.
The diffraction approach won’t be replacing the technology in digital cameras anytime soon, but it does have advantages that make it an ideal sensing technology, especially for systems designed for the Internet of Things (IoT). For example, tasks like motion sensing and basic object-position sensing require quite a bit of computation to analyze an image. This would take more time with a diffraction system if the image were recreated first, but such tasks are easier to accomplish using the raw data from the sensor array. Changes in the region being viewed show up as changes in one or more pixels on the sensor array, so it is possible to set up a “tripwire” of a few pixels instead of analyzing the whole image to detect the change. This significantly reduces processing requirements and, hence, power consumption.
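A minimal sketch of the “tripwire” idea, assuming direct access to raw sensor frames; the watched pixel coordinates and the threshold below are invented for illustration, not taken from Rambus:

```python
import numpy as np

# Hypothetical tripwire: watch a handful of raw sensor pixels and flag
# motion when any of them changes by more than a threshold. No image
# reconstruction is needed, which is what keeps the power budget low.
WATCH = [(10, 10), (64, 64), (100, 30)]   # a few raw-sensor pixels
THRESHOLD = 12                            # change (in counts) to trigger

def tripped(prev_frame, frame):
    """Return True if any watched raw pixel changed enough."""
    return any(
        abs(int(frame[r, c]) - int(prev_frame[r, c])) > THRESHOLD
        for r, c in WATCH
    )

prev = np.full((128, 128), 50, dtype=np.uint8)
still = prev.copy()
moved = prev.copy()
moved[64, 64] = 90            # light redistributed by motion in the scene

print(tripped(prev, still))   # False
print(tripped(prev, moved))   # True
```

Because the grating spreads light from every scene point across the array, even a few watched pixels see contributions from a wide region, which is what makes such a sparse tripwire plausible.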
The system also makes tracking of angular movement between images easier since it can track a fractional pixel movement because the incoming light is spread across the imager. Rambus estimates that the sensor has 1 arc minute of accuracy. This could be useful for a fiducial LED tracking system that might be found on a virtual reality helmet.
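One common way to recover fractional-pixel movement from a spread-out light pattern is an intensity-weighted centroid. The sketch below uses a synthetic Gaussian spot rather than a real grating pattern, and centroiding is just one illustrative technique, not necessarily the method Rambus uses:

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid, giving a sub-pixel (y, x) position."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

def spot(shape, cy, cx, sigma=3.0):
    """Synthetic blob standing in for light spread across the imager."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

a = spot((64, 64), 32.0, 32.0)
b = spot((64, 64), 32.0, 32.25)     # source moved a quarter of a pixel

dx = centroid(b)[1] - centroid(a)[1]
print(round(dx, 2))                 # ~0.25: the sub-pixel shift is recoverable
```

Because each source illuminates many pixels, the shift estimate averages over all of them, which is how sub-pixel (arc-minute-scale) tracking becomes possible.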
For many applications, just knowing that something changed is sufficient. Likewise, detecting a change in angle to track movement can be easier with the raw data than with the processing normally required to compare two sequential images.
A simple implementation is another advantage of the diffraction approach. Initial tests were performed with a commercial sensor array and a custom diffraction grating, but it should be possible to fabricate the entire system on-chip by adding a step to the existing process (Fig. 2). Melding construction of the imaging system into chip fabrication has a number of advantages, from eliminating the alignment issues a lens introduces to lowering system cost.
Moving the overall imaging system down to the chip level can lead to some interesting applications. For example, including multiple sensors on the same chip, or integrating the sensor on the same die as the supporting DSP, opens up interesting possibilities. The test grating was designed for the visible spectrum, but diffraction gratings that span a narrower spectrum are easier to make. The overall cost of a sensor is a fraction of that of a refraction-based solution, so spreading a dozen sensors around a system could well be a practical and useful alternative.