Most robots achieve grasping and tactile sensing through motorized means, which can be excessively bulky and rigid. A Cornell group has devised a way for a soft robot to feel its surroundings internally, in much the same way humans do.
Before a robot arm can reach into a tight space or pick up a delicate object, the robot needs to know precisely where its hand is. Researchers at Carnegie Mellon University’s Robotics Institute have shown that a camera attached to the robot’s hand can rapidly create a 3-D model of its environment and also locate the hand within that 3-D world.
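At its core, locating the hand within the 3-D model means recovering a rigid transform between the camera's frame and the map's frame. As a hedged illustration only (not the CMU team's actual pipeline), here is a minimal Kabsch-style pose estimate from 3-D point correspondences; the function name and the toy landmark data are invented for the sketch, and a real system would get its correspondences from feature matching.

```python
import numpy as np

def estimate_hand_pose(map_pts, cam_pts):
    """Recover R, t such that map_pts ~= R @ cam_pts + t.

    map_pts, cam_pts: (N, 3) arrays of the same landmarks, expressed
    in the world map and in the hand camera's frame. This stands in
    for the correspondence step a real SLAM pipeline would provide.
    """
    cm, cc = map_pts.mean(axis=0), cam_pts.mean(axis=0)
    # Kabsch: SVD of the cross-covariance of the centered point sets
    H = (cam_pts - cc).T @ (map_pts - cm)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cm - R @ cc
    return R, t
```

Given at least three non-collinear landmarks, the least-squares rotation and translation pin down the hand's pose in the map; running this each frame, as the map grows, is the essence of camera-based hand localization.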
This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
The simple task of picking something up is not as easy as it seems. Not for a robot, at least. Roboticists aim to develop robots that can pick up anything, but today most robots perform "blind grasping": they are dedicated to picking up an object from the same location every time. If anything changes, such as the shape, texture, or location of the object, the robot won't know how to respond, and the grasp attempt will most likely fail.
An engineering team led by Robert Shepherd of Cornell University, USA, has fashioned photonic strain sensors out of easy-to-fabricate elastomer waveguides and used them to provide fine-scale tactile feedback to a soft, flexible prosthetic hand (Sci. Robot., doi: 10.1126/scirobotics.aai7529). The waveguides, capable of sensing textural differences at the micrometer scale, enabled the hand to accomplish a number of tasks, including picking the ripest tomato from a group of three in multiple trials.
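The sensing principle is that bending or stretching the elastomer waveguide lets light leak out, so the optical power reaching the output drops as strain increases. A toy calibration, with a made-up linear sensitivity (the paper reports its own measured loss curves), might invert that relationship like this:

```python
# Toy model of a stretchable-waveguide strain sensor: assume output
# power (dB) drops linearly with elongation. The sensitivity constant
# is invented for illustration, not taken from the Cornell paper.
LOSS_DB_PER_MM = 2.0  # assumed: 2 dB of extra loss per mm of stretch

def elongation_from_power(p_out_db, p_baseline_db):
    """Estimate waveguide elongation (mm) from measured output power.

    p_baseline_db: output power of the unstrained waveguide.
    p_out_db: current output power (lower when the guide is stretched).
    """
    loss_db = p_baseline_db - p_out_db
    # Clamp noisy readings above baseline to zero elongation
    return max(loss_db, 0.0) / LOSS_DB_PER_MM
```

A controller could compare such estimates across fingertips during a gentle squeeze, which is the kind of signal that would let a soft hand distinguish a ripe, yielding tomato from a firm, unripe one.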