On a stormy night in Manhattan, children gathered in the recently opened MoMath (the National Museum of Mathematics) to see the “Robot Swarm,” called “Swarm Bots” by the Multi-Robot Systems Lab (MRSL). Before letting the robots loose, James McLurkin, assistant professor of computer science at Rice University and roboticist-in-residence at MoMath, gave a presentation on the development of these robots. Though the exhibit officially opens at MoMath this winter, visitors were offered a sneak preview of the work in progress.
McLurkin’s idea of robots working together traces back to ants, bees, wasps, and termites, and the way these social insects perform tasks collectively. The robots are given a command and follow a “reference robot” to fulfill it. The lab has developed many algorithms for the bots, including “match orientation,” “follow the leader,” and “cluster into groups.”
In match orientation, the reference robot (blue light) acts as the leader, and the active robots (red lights) align with it. Follow the leader, by contrast, is a continuous “handshaking” process between robots that forms a line; if one robot fails to respond, another can be recruited to take its place. In the “cluster into groups” algorithm, demonstrated at the event, the robots sort themselves into groups, each of which selects a leader (red, green, and blue lights). Once the groups are formed, each cluster moves away en masse from the others.
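The match-orientation behavior can be pictured as a simple feedback loop: each active robot compares its own heading with the reference robot’s heading and turns to reduce the difference. The sketch below is illustrative only; the function name, the gain parameter, and the control loop are assumptions for the example, not MRSL’s actual code.

```python
import math

def match_orientation(reference_heading, my_heading, gain=0.5):
    """One control step: turn partway toward the reference heading.

    Hypothetical sketch; headings are in radians. The gain scales
    how aggressively the robot turns on each step.
    """
    # Signed angular error, wrapped to (-pi, pi]
    error = math.atan2(math.sin(reference_heading - my_heading),
                       math.cos(reference_heading - my_heading))
    return my_heading + gain * error

# A robot facing heading 0 converges toward a reference facing pi/2.
heading = 0.0
for _ in range(20):
    heading = match_orientation(math.pi / 2, heading)
print(round(heading, 3))  # approaches pi/2 ~= 1.571
```

Wrapping the error with `atan2` keeps the robot turning the short way around, even when the two headings straddle the ±180° boundary.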
At the core of the robots is an inter-robot communication and localization system. Their hardware is designed to provide local network geometry, which includes network connectivity and local pose estimation of neighboring robots. In addition, each robot measures distance to its neighbors through four receivers, which keeps the robots from bumping into one another.
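The collision-avoidance role of the four receivers can be sketched as a rule: if the closest neighbor is nearer than some safe distance, steer away from the receiver that detected it. This is a minimal sketch under assumed names; the direction labels, the threshold, and the steering commands are invented for illustration, not taken from the robots’ firmware.

```python
def avoid_collision(ranges, safe_distance=0.15):
    """Choose a steering response from four range readings.

    Hypothetical sketch: `ranges` maps a receiver direction
    ('front', 'left', 'right', 'rear') to a distance in meters.
    If any neighbor is closer than safe_distance, steer away
    from the closest one; otherwise keep going straight.
    """
    closest = min(ranges, key=ranges.get)
    if ranges[closest] >= safe_distance:
        return "forward"
    # Turn or back away from the side the closest neighbor is on.
    return {"front": "reverse",
            "rear": "forward",
            "left": "turn_right",
            "right": "turn_left"}[closest]

# A neighbor 10 cm away on the left triggers a turn to the right.
print(avoid_collision({"front": 0.5, "left": 0.1,
                       "right": 0.4, "rear": 0.6}))
```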
The local network geometry model is a compromise between two other approaches: one assumes a full global coordinate system for the robots, while the other is a range-only model, knowing just the distances between robots, that requires extensive computation.
McLurkin has co-authored many papers on the subject. In his 2013 paper, “Scale-free coordinates for multi-robot systems with bearing-only sensors,” his research team explains what they hope to accomplish with the local network geometry:
“Scale-free coordinates allow each robot to know, up to scaling, the relative position and orientation of other robots in the network. We consider a weak sensing model where each robot is only capable of measuring the angle, relative to its own heading, to each of its neighbors.”
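The weak sensing model in the quote boils down to one measurement: the angle to a neighbor, relative to the robot’s own heading. A minimal sketch of that measurement is below; the positions and world frame exist only in the simulation, since (as the paper notes) each real robot observes nothing but the resulting bearing.

```python
import math

def bearing_to_neighbor(my_pos, my_heading, neighbor_pos):
    """Angle to a neighbor, relative to this robot's own heading.

    Illustrative only: positions are (x, y) tuples in a world frame
    the robots themselves never see. The result is wrapped to
    (-pi, pi], matching the bearing-only sensing model.
    """
    dx = neighbor_pos[0] - my_pos[0]
    dy = neighbor_pos[1] - my_pos[1]
    world_angle = math.atan2(dy, dx)
    bearing = world_angle - my_heading
    return math.atan2(math.sin(bearing), math.cos(bearing))

# A robot at the origin facing along +x sees a neighbor at (1, 1)
# 45 degrees to its left.
print(bearing_to_neighbor((0, 0), 0.0, (1, 1)))  # pi/4 ~= 0.785
```

Note that no distance appears anywhere in the result: from bearings like these alone, relative positions can be recovered only up to an unknown scale factor, which is exactly why the coordinates are called scale-free.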
The robots, which run on an ARM Cortex-M3 CPU, are not yet in their final form. The photos show them as square units with much of their hardware exposed. When they are finally set up in the museum, however, they will be encased in white shells to make them more visually appealing and easier for children to handle. The robots were built during McLurkin’s tenure at iRobot Corp.