The folks on Capitol Hill and in the White House seem determined, one way or another, to reform immigration. The likely outcome will be tighter border control and fewer immigrant workers. That leaves farmers with an ugly dilemma: leave some fruit to rot in the field and raise produce prices, a trend that is already happening and scaring farmers, or find an alternative harvesting method.
Companies like Vision Robotics Corp. (VRC) are making the latter possible by developing an "intelligent" duo of robots. The "scout" is loaded with image sensors to map out and plan the harvesting, while the "harvester" uses multiple arms to pick delicate produce, such as oranges and grapes, quickly, efficiently, and economically (Fig. 1).
The scout's first challenge is to develop a complete understanding of its operating environment by mapping all of the fruit in three dimensions. The process, known as Simultaneous Localization and Mapping (SLAM), allows the robot to harvest with impressive speed and accuracy. The map lets the scout develop a plan for each tree or bush and guide the harvester, which can pick roughly one orange every 2.5 seconds with each of its eight "hands."
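Those per-arm numbers compound quickly. A minimal back-of-the-envelope sketch (the function name and hourly framing are ours, not VRC's) shows the aggregate picking rate implied by 2.5 seconds per orange across eight arms:

```python
# Hypothetical throughput sketch based on the figures quoted in the
# article: ~2.5 s per orange per arm, eight arms working in parallel.
SECONDS_PER_ORANGE_PER_ARM = 2.5
NUM_ARMS = 8

def oranges_per_hour(seconds_per_orange: float, arms: int) -> float:
    """Aggregate picking rate across all arms, in oranges per hour."""
    return arms * 3600.0 / seconds_per_orange

rate = oranges_per_hour(SECONDS_PER_ORANGE_PER_ARM, NUM_ARMS)
print(round(rate))  # 11520 oranges per hour
```

At that pace, a harvester running around the clock would clearly outstrip any human picking crew, which is the economic case the article is making.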
Other robots, like the popular home vacuums, already know how to map an environment and plan the best route. The main difference is that homes have well-defined corners and edges, and those robots generally operate on a mostly level, even surface.
Farms, by contrast, tend to be planted in straight rows, so mapping the entire farm is less important. The GPS positions of the farm's corners would be programmed into the scout, along with other parameters such as which sections to harvest on a given day, the size of fruit to pick, and, in more advanced models, perhaps the sweetness and thickness of the skin.
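Such a parameter block might look like the following sketch. All field names and values here are illustrative assumptions, not VRC's actual interface:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical harvest configuration for one scout run. Field names
# are our invention; they mirror the parameters the article lists.
@dataclass
class HarvestPlan:
    corner_coords: List[Tuple[float, float]]       # GPS (lat, lon) of field corners
    sections_today: List[int]                      # which sections to harvest today
    min_fruit_diameter_mm: float                   # pick only fruit at least this size
    min_sweetness_brix: Optional[float] = None     # advanced models only
    min_skin_thickness_mm: Optional[float] = None  # advanced models only

plan = HarvestPlan(
    corner_coords=[(36.77, -119.42), (36.77, -119.41),
                   (36.76, -119.41), (36.76, -119.42)],
    sections_today=[3, 4, 5],
    min_fruit_diameter_mm=70.0,
)
```

Keeping the advanced criteria optional matches the article's point that sweetness and skin thickness would only appear in later models.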
The scout gets its visual input from Micron MT9V022177ATC stereoscopic cameras (CMOS infrared-based image sensors) and then analyzes the 3D SLAM data using algorithms running on processors like the Cell Broadband Engine (Fig. 2). With up to 16 pairs of image sensors, the robot would require approximately four Cell processors.
Each processor will run stereo image analysis by segmenting the images and matching their features. This is accomplished with several well-known image-processing algorithms, including image correlation, edge segmentation with Bayesian filtering, and spatial matching. Each pixel match is then used to calculate the 3D point of the matched object, and a 3D grid is born.
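The step from pixel matches to 3D points is standard stereo triangulation: the horizontal offset (disparity) between a feature's position in the left and right images determines its depth. A minimal sketch, with an assumed focal length and camera baseline rather than VRC's actual optics:

```python
import numpy as np

# Stereo triangulation sketch: for each matched pixel pair, disparity
# d = x_left - x_right gives depth Z = f * B / d, where f is the focal
# length in pixels and B the camera baseline in meters. The constants
# below are illustrative assumptions.
F_PX = 700.0       # focal length in pixels (assumed)
BASELINE_M = 0.12  # separation between the two cameras (assumed)

def matches_to_points(x_left, x_right, y, f=F_PX, b=BASELINE_M):
    """Convert matched pixel columns (left/right) and rows to 3D points."""
    x_left, x_right, y = map(np.asarray, (x_left, x_right, y))
    d = x_left - x_right        # disparity in pixels
    z = f * b / d               # depth: larger disparity = closer object
    x = x_left * z / f          # back-project to camera coordinates
    y_cam = y * z / f
    return np.stack([x, y_cam, z], axis=-1)

# Two matched features; each output row is an (X, Y, Z) point that
# would feed the scout's 3D fruit grid.
pts = matches_to_points([120.0, 310.0], [100.0, 296.0], [80.0, 150.0])
```

Accumulating these points over many image pairs is what builds the 3D grid described above.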
To keep costs down, VRC will use low-cost wide-angle lenses. The robot will compensate in software for distortions such as "fish eye," where objects toward the edges of the image appear to bow outward. A camera-calibration routine, for example, would remap and smooth the pixels to correct fish-eye effects.
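One common way to do this, and a plausible stand-in for the calibration routine described here, is a polynomial radial-distortion model: each pixel is shifted along the line through the image center by a factor that depends on its distance from the center. The coefficients below are placeholders; in practice they come from a one-time camera calibration:

```python
# Radial-distortion correction sketch (assumed model, not VRC's code).
# Each pixel moves along its ray from the image center by a scale
# factor 1 + k1*r^2 + k2*r^4, where r is the normalized radius.
K1, K2 = -0.28, 0.08  # placeholder distortion coefficients

def undistort(px, py, cx, cy, half_w):
    """Move one pixel from its distorted to its corrected position.

    (cx, cy) is the image center; half_w normalizes the radius so the
    coefficients stay unitless.
    """
    dx, dy = (px - cx) / half_w, (py - cy) / half_w
    r2 = dx * dx + dy * dy
    scale = 1.0 + K1 * r2 + K2 * r2 * r2
    return cx + dx * scale * half_w, cy + dy * scale * half_w
```

With a negative k1, pixels near the edge are pulled back toward the center, which is exactly the correction a barrel (fish-eye) distortion needs; the center pixel stays put.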
VRC's robots will also extend their dynamic range and use onboard infrared headlights to operate in anything from complete darkness to full sunlight. Even sunlight flickering through trees on a windy day won't slow down these robot farmers. So a robot farmer could potentially harvest around the clock without breaks, unlike humans, who need food, water, and rest.
The robots are currently in the simulation phase. (About 90% of the software design work is being done in simulation.) VRC expects its robot farmers to be available in two to four years at an approximate price tag of $500,000.
Vision Robotics Corp.
visionrobotics.com