Due to their parallel nature, field-programmable gate arrays (FPGAs) have the potential to increase computational bandwidth in systems where most processing functions are performed by computer software. A research project funded by the Montana Department of Commerce plans to leverage FPGA-based hardware to increase the frame rate and resolution of hyperspectral cameras.
Hyperspectral imaging refers to the measurement of hundreds of individual colors within a pixel, instead of the typical three colors that conventional cameras capture. The hyperspectral cameras used in the research project were developed by Resonon Inc., an imaging company based in Bozeman, Mont. These cameras are typically used in sorting foods on robotic assembly lines, remote sensing, environmental monitoring, and agricultural mapping.
In current hyperspectral cameras, image data travels over a cable into a computer processor, where algorithms convert it into a full hyperspectral image. This approach, according to Resonon president Rand Swanson, limits real-time data processing to a fraction of what can be accessed by hyperspectral cameras. And that could be an issue for machine vision technologies using hyperspectral imaging. In robotic assembly lines that sort almonds, for example, there are “only a few seconds between the time the hyperspectral data are collected and when algorithms need to convert the data into useful information to direct the actuators,” Swanson said in a recent statement to Photonics.
To tap the latent capabilities of these systems, an academic team from Montana State University is translating the computer software used to process the image data into multiple parallel streaming processes in FPGA hardware. The FPGA hardware, which is being programmed in the C language, will offload processing functions from the host computer to a newly embedded processor within the camera, allowing the hyperspectral images to be accessed in real time.
The potential benefits of an FPGA-based design for image processing are significant, according to research from the university. Resonon's hyperspectral cameras currently have a spatial resolution of 640 pixels, capture 240 colors, and operate at 140 frames per second. An FPGA-based system would be capable of 2048-pixel resolution, 512 colors, and a frame rate of 340 frames per second.
These capabilities would enable the cameras to resolve smaller details and classify finer gradients of color. In addition, the improved frame rate would enable sorting and inspection to be done much faster. The research team is using an optimizing compiler and other support provided by Impulse Accelerated Technologies to translate the software into an FPGA format.
This research, said Brian Durwood, chief operating officer at Impulse Accelerated, can be applied beyond food sorting and mapping. He said that the technology could be used in mission-critical applications, such as search-and-rescue missions or disaster relief, noting that "the same algorithmic architecture that visually sorts shells from almonds can also be used to quickly scan the sea and 'sort out' a person overboard as differentiated from the surrounding sea."