MIT’s New Artificial Intelligence Algorithm Designs Soft Robots

There are some tasks that traditional robots — the rigid and metallic kind — simply aren’t cut out for. Soft-bodied robots, on the other hand, may be able to interact with people more safely or slip into tight spaces with ease. But for robots to reliably complete their programmed duties, they need to know the whereabouts of all their body parts. That’s a tall task for a soft robot that can deform in a virtually infinite number of ways.

MIT researchers have developed an algorithm to help engineers design soft robots that collect more useful information about their surroundings. The deep-learning algorithm suggests an optimized placement of sensors within the robot’s body, allowing it to better interact with its environment and complete assigned tasks. The advance is a step toward the automation of robot design. “The system not only learns a given task, but also how to best design the robot to solve that task,” says Alexander Amini. “Sensor placement is a very difficult problem to solve. So, having this solution is extremely exciting.”

The research will be presented at April’s IEEE International Conference on Soft Robotics and will be published in the journal IEEE Robotics and Automation Letters. Co-lead authors are Amini and Andrew Spielberg, both PhD students in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Other co-authors include MIT PhD student Lillian Chin, and professors Wojciech Matusik and Daniela Rus.

Creating soft robots that complete real-world tasks has been a long-running challenge in robotics. Their rigid counterparts have a built-in advantage: a limited range of motion. Rigid robots’ finite array of joints and limbs usually makes for manageable calculations by the algorithms that control mapping and motion planning. Soft robots aren’t so tractable.

Soft-bodied robots are flexible and pliant — they typically feel more like a bouncy ball than a bowling ball. “The main problem with soft robots is that they are infinitely dimensional,” says Spielberg. “Any point on a soft-bodied robot can, in theory, deform in any way possible.” That makes it tough to design a soft robot that can map the location of its own body parts. Past efforts have used an external camera to chart the robot’s position and feed that information back into the robot’s control program. But the researchers wanted to create a soft robot untethered from external aid.

“You can’t put an infinite number of sensors on the robot itself,” says Spielberg. “So, the question is: How many sensors do you have, and where do you put those sensors in order to get the most bang for your buck?” The team turned to deep learning for an answer.

The researchers developed a novel neural network architecture that both optimizes sensor placement and learns to efficiently complete tasks. First, the researchers divided the robot’s body into regions called “particles.” Each particle’s rate of strain was provided as an input to the neural network. Through a process of trial and error, the network “learns” the most efficient sequence of movements to complete tasks, like gripping objects of different sizes. At the same time, the network keeps track of which particles are used most often, and it culls the lesser-used particles from the set of inputs for the network’s subsequent trials.
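The paper spells out the full method, but the alternate-train-and-cull loop it describes can be illustrated with a minimal sketch. The snippet below assumes a PyTorch-style setup; the network shape, the stand-in task loss, and the weight-norm "importance" proxy are all illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn

N_PARTICLES, N_ACTIONS, HIDDEN = 200, 8, 64

# Policy network: per-particle strain rates in, actuation commands out.
net = nn.Sequential(
    nn.Linear(N_PARTICLES, HIDDEN),
    nn.ReLU(),
    nn.Linear(HIDDEN, N_ACTIONS),
)
mask = torch.ones(N_PARTICLES)  # 1 = particle still feeds the network
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def importance():
    # Illustrative proxy for "how much each particle is used": the norm of
    # the first-layer weight column that reads that particle's strain signal.
    return net[0].weight.norm(dim=0) * mask

for _ in range(10):  # alternate rounds of training and culling
    for _ in range(500):
        # Stand-in task: regress to a dummy target controller. A real task
        # loss would come from simulated grasping, as described above.
        strain = torch.randn(32, N_PARTICLES) * mask
        target = torch.randn(32, N_ACTIONS)
        loss = ((net(strain) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Cull the least-used 10% of the particles that remain active.
    active = mask.nonzero().squeeze(1)
    n_cull = max(1, int(0.1 * len(active)))
    drop = active[importance()[active].argsort()[:n_cull]]
    mask[drop] = 0.0

sensor_sites = mask.nonzero().squeeze(1)  # suggested sensor locations
print(f"{len(sensor_sites)} suggested sensor sites:", sensor_sites.tolist())
```

In this sketch, masking an input stands in for removing a sensor outright; the particles that survive every culling round are the ones the network leaned on most, and hence the suggested sensor sites.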

By identifying the most important particles, the network also suggests where sensors should be placed on the robot to ensure efficient performance. For example, in a simulated robot with a grasping hand, the algorithm might suggest that sensors be concentrated in and around the fingers, where precisely controlled interactions with the environment are vital to the robot’s ability to manipulate objects. While that may seem obvious, it turns out the algorithm vastly outperformed humans’ intuition on where to site the sensors.

The researchers pitted their algorithm against a series of expert predictions. For three different soft robot layouts, the team asked roboticists to manually select where sensors should be placed to enable the efficient completion of tasks like grasping various objects. Then they ran simulations comparing the human-sensorized robots to the algorithm-sensorized robots. And the results weren’t close. “Our model vastly outperformed humans for each task, even though I looked at some of the robot bodies and felt very confident about where the sensors should go,” says Amini. “It turns out there are a lot more subtleties in this problem than we initially expected.”

Spielberg says their work could help to automate the process of robot design. In addition to developing algorithms to control a robot’s movements, “we also need to think about how we’re going to sensorize these robots, and how that will interplay with other components of the system,” he says. And better sensor placement could have industrial applications, especially where robots are used for fine tasks like gripping. “That’s something where you need a very robust, well-optimized sense of touch,” says Spielberg. “So, there’s potential for immediate impact.”

“Automating the design of sensorized soft robots is an important step toward rapidly creating intelligent tools that help people with physical tasks,” says Rus. “The sensors are an important aspect of the process, as they enable the soft robot to ‘see’ and understand the world and its relationship with the world.”

Reference: “Co-Learning of Task and Sensor Placement for Soft Robotics” by Andrew Spielberg, Alexander Amini, Lillian Chin, Wojciech Matusik and Daniela Rus, 2 February 2021, IEEE Robotics and Automation Letters.
DOI: 10.1109/LRA.2021.3056369

This research was funded, in part, by the National Science Foundation and the Fannie and John Hertz Foundation.