Robots Lending A Hand

On Monday, Uber launched its fleet of autonomous cars (Ford Fusions) in Pittsburgh, PA. The test represents Uber Chief Executive Travis Kalanick’s audacious vision to one day roll out an entire fleet of autonomous vehicles to replace the company’s roughly 1.5 million drivers and to ferry commuters, packages and food around urban centers. In the meantime, Uber is turning Pittsburgh into an experimental lab, summoning the public to participate before any laws have been written. Uber invited up to 1,000 of its “most loyal” Pittsburgh customers to experience the futuristic vehicles in the first U.S. real-world test of self-driving cars for regular people, effectively using human beings as guinea pigs. Let’s see how they do without the wheel…

Also on Monday, Japanese UAV manufacturer Prodrone launched the first commercially available drone with dual robot arms (claw-like talons). The drone can lift more than 40 pounds of cargo and is envisioned for use in marine and terrestrial rescue missions, though it is limited to roughly 30 minutes of flight time at speeds of up to 37 mph.

According to Prodrone’s CEO, Masakazu Kono, “It can work in high places, collect dangerous materials, and ship cargo to remote areas…a sophisticated algorithm maintains the drone’s stability as its center of gravity shifts when using the robot arms.”

The company anticipates that in the future the drone will be able to do more ‘hands-on’ tasks, such as cutting cables, turning dials, flicking switches, and possibly other jobs just too dangerous, dirty or dull for humans. Kono adds, “We are firmly focused on the future of commercial drones and on being world pioneers in developing task-oriented drones.”

While the video is very impressive, it does evoke a horror film image of a modern day remake of Hitchcock’s The Birds (“The Drones”). 

Hands are hard. Robots have many strong suits, but delicacy traditionally hasn’t been one of them. Rigid limbs and digits make it difficult for them to grasp, hold, and manipulate a range of everyday objects without dropping or crushing them. Last year, MIT researchers showed how air-filled silicone fingers could gently handle delicate objects, like tomatoes or eggs. This gave birth to a new industry of modular soft-robotic grippers for agriculture, warehousing and manufacturing.

According to MIT Professor Daniela Rus, “Robots are often limited in what they can do because of how hard it is to interact with objects of different sizes and materials. Grasping is an important step in being able to do useful tasks; with this work we set out to develop both the soft hands and the supporting control and planning systems that make dynamic grasping possible.”

Roboticists, like Rus, aim to develop a robot that can pick up anything—but today most robots perform “blind grasping,” where they’re dedicated to picking up an object from the same location every time. If anything changes, such as the shape, texture, or location of the object, the robot won’t know how to respond, and the grasp attempt will most likely fail.

Robots are still a long way off from being able to grasp any object perfectly on their first attempt, even soft robots. Why do grasping tasks pose such a difficult problem? Well, when people try to grasp something they use a combination of senses, the primary ones being visual and tactile. But so far, most attempts at solving the grasping problem have focused on using vision alone.

Professor Vincent Duchaine of École de Technologie Supérieure (ÉTS) in Montreal, Canada, wrote in IEEE Spectrum this summer that “the current focus on robotic vision is unlikely to enable perfect grasping. In addition to vision, the future of robotic grasping requires something else: tactile intelligence.”

He compares the task to how Steven Pinker’s famous book, How the Mind Works, describes all the things the human sense of touch accomplishes: “Think of lifting a milk carton. Too loose a grasp, and you drop it; too tight, and you crush it; and with some gentle rocking, you can even use the tugging on your fingertips as a gauge of how much milk is inside!” Duchaine states that because robots lack these sensing capabilities, they still lag far behind humans when it comes to even the simplest pick-and-place tasks.

So far, most of the research in robotic grasping has aimed at building intelligence around visual feedback. One way of doing so is through database image matching, which is the method used in the Million Objects Challenge at Brown’s Humans to Robots Lab. The idea is for the robot to use a camera to detect the target object and monitor its own movements as it attempts to grasp the object. While doing so, the robot compares the real-time visual information with 3D image scans stored in the database. Once the robot finds a match, it can find the right algorithm for its current situation.
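
To make the database-matching idea concrete, here is a minimal Python sketch of the general approach (not Brown’s actual pipeline): a descriptor computed from the live camera view is compared against stored scan descriptors, and the grasp strategy attached to the closest match is reused. The object names, descriptors, and grasp strategies below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical database: each entry pairs a feature descriptor derived from a
# stored 3D scan with a grasp strategy that previously worked for that object.
object_db = {
    "mug":         {"descriptor": np.array([0.9, 0.1, 0.3]), "grasp": "side pinch, 4 cm aperture"},
    "banana":      {"descriptor": np.array([0.2, 0.8, 0.5]), "grasp": "top wrap, 6 cm aperture"},
    "screwdriver": {"descriptor": np.array([0.1, 0.2, 0.9]), "grasp": "cylindrical wrap, 2 cm aperture"},
}

def match_and_grasp(live_descriptor: np.ndarray) -> str:
    """Compare the camera's live descriptor against the stored scans and
    return the grasp strategy of the closest match (cosine similarity)."""
    best_name, best_score = None, -1.0
    for name, entry in object_db.items():
        d = entry["descriptor"]
        score = float(live_descriptor @ d /
                      (np.linalg.norm(live_descriptor) * np.linalg.norm(d)))
        if score > best_score:
            best_name, best_score = name, score
    return f"{best_name}: {object_db[best_name]['grasp']} (similarity {best_score:.2f})"

# A live observation whose descriptor most resembles the stored banana scan.
print(match_and_grasp(np.array([0.25, 0.75, 0.45])))
```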

Other researchers have turned to machine learning techniques for improving robotic grasping. Google recently conducted an experiment in grasping technology that combined a vision system with machine learning. Google’s biggest breakthrough was in showing how robots could teach themselves—using a deep convolutional neural network, a vision system, and a lot of data (800,000+ grasp attempts)—to improve based on what they learned from past experiences. Google proved that it is possible to teach robots without pre-programmed responses, opening a new frontier at the intersection of AI (deep learning) and mechanical engineering.
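
To give a feel for how such a self-teaching system might be wired up, here is an assumption-laden PyTorch sketch (not Google’s actual model): a small convolutional network looks at the camera image plus a candidate motor command and scores how likely that command is to end in a successful grasp, learning from logged attempts. The network shape, command encoding, and data are invented for illustration.

```python
import torch
import torch.nn as nn

class GraspSuccessNet(nn.Module):
    """Toy stand-in for a grasp-success predictor: it fuses image features
    with a candidate motor command and outputs a success logit."""
    def __init__(self):
        super().__init__()
        self.vision = nn.Sequential(                       # small convolutional trunk
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(                         # fuse image features + command
            nn.Linear(32 + 4, 64), nn.ReLU(),
            nn.Linear(64, 1),                              # logit for "grasp succeeded"
        )

    def forward(self, image, motor_cmd):
        feats = self.vision(image)
        return self.head(torch.cat([feats, motor_cmd], dim=1))

# One training step on a (hypothetical) batch of logged grasp attempts.
model = GraspSuccessNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.randn(8, 3, 64, 64)                 # camera frames
commands = torch.randn(8, 4)                       # e.g. (dx, dy, dz, gripper rotation)
succeeded = torch.randint(0, 2, (8, 1)).float()    # did the gripper hold the object?

logits = model(images, commands)
loss = nn.functional.binary_cross_entropy_with_logits(logits, succeeded)
loss.backward()
opt.step()
```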

As Pinker states, the sense of touch plays a central role for humans during grasping and manipulation tasks. For amputees that have lost their hands, one of the biggest sources of frustration is the inability to sense what they’re touching while using prosthetic devices. Without the sense of touch, the amputees have to pay close visual attention during grasping and manipulation tasks, whereas a non-amputee could pick something up without even looking at it.

Duchaine claims that the real breakthrough will come by teaching robots tactile intelligence to better predict grasp sensing. Researchers are aware of the crucial role that tactile sensors play in grasping, and the past 30 years have seen many attempts at building a tactile sensor that replicates the human apparatus. However, the signals sent by a tactile sensor are complex and of high dimension, and adding sensors to a robotic hand often doesn’t directly translate into improved grasping capabilities.

At his lab, Duchaine is building tactile intelligence with a machine learning algorithm that uses “pressure images to predict successful and failed grasps.”

Robotiq gripper and various objects from the Amazon Picking Challenge

In the example above, Duchaine outfitted a robotic hand with several multimodal tactile sensors, had the robot pick up a variety of objects, and fed the resulting grasp data into a neural network. The resulting system was able to predict grasp success with roughly 80% accuracy.
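
Here is a hedged scikit-learn sketch of that idea, not Duchaine’s actual dataset or model: treat each grasp attempt’s fingertip pressure readings as a small “pressure image,” flatten it into a feature vector, and fit a classifier that predicts whether the grasp held. The sensor resolution, data, and labels are stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-in data: each grasp attempt yields a small "pressure
# image" from the fingertip sensors, labeled 1 if the object was held.
rng = np.random.default_rng(0)
pressure_images = rng.random((200, 8, 8))                        # 200 attempts, 8x8 taxel grid
labels = (pressure_images.mean(axis=(1, 2)) > 0.5).astype(int)   # fake ground-truth outcomes

# Flatten each pressure image into a feature vector and fit a classifier,
# mirroring the idea of predicting grasp success from tactile data alone.
X = pressure_images.reshape(len(pressure_images), -1)
clf = LogisticRegression(max_iter=1000).fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```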

In another example, the team focused on slippage detection, modeled on how humans quickly recognize an object slipping out of their fingers thanks to skin receptors that detect rapid changes in pressure and vibration. The researchers fed vibration images (spectrograms) of objects slipping from human hands into the database, and the same robot then used the machine learning algorithm to predict object slippage with roughly 90% accuracy.
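
A minimal sketch of the slippage signal itself, assuming a generic fingertip vibration sensor rather than ÉTS’s actual hardware: convert the vibration trace into a spectrogram and look for the burst of high-frequency energy that a slip produces.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 2000  # sample rate of the (hypothetical) fingertip vibration sensor, Hz
t = np.arange(0, 1.0, 1 / fs)

# Two toy vibration traces: a stable hold (low-level noise) and a slip
# (a burst of high-frequency vibration, roughly what skin receptors pick up).
stable = 0.1 * np.random.randn(len(t))
slip = stable + np.sin(2 * np.pi * 400 * t) * (t > 0.5)

def high_freq_energy(signal: np.ndarray) -> float:
    """Turn the vibration trace into a spectrogram and sum the energy
    above 200 Hz, a crude proxy for 'the object is slipping'."""
    freqs, _, Sxx = spectrogram(signal, fs=fs, nperseg=256)
    return float(Sxx[freqs > 200].sum())

for name, sig in [("stable grasp", stable), ("slipping object", slip)]:
    print(name, "-> high-frequency energy:", round(high_freq_energy(sig), 3))
```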

Now Duchaine’s team is testing ways to have the algorithm update itself automatically, so that each grasp attempt helps the robot make better predictions. Long term, the robot should be able to use this information to adjust its behavior mid-grasp. According to ÉTS’s research, it is not enough to have better grippers (as Rus argues) or more robust computer vision (as in Brown’s study); the glue that holds everything together is a neural network for “tactile sensing,” so that a robot can pick up any object (even my beer) without spilling it.
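
One simple way to realize that update-after-every-attempt loop, sketched here with scikit-learn’s incremental SGDClassifier on invented tactile features (not the team’s actual system), is to call partial_fit after each grasp so the model’s predictions keep improving as experience accumulates.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Incremental linear classifier whose weights are updated after every grasp,
# so later predictions reflect everything the robot has tried so far.
rng = np.random.default_rng(1)
model = SGDClassifier()
classes = np.array([0, 1])  # 0 = grasp failed, 1 = grasp held

for attempt in range(1, 51):
    tactile_features = rng.random((1, 16))                     # hypothetical sensor snapshot
    outcome = np.array([int(tactile_features.mean() > 0.5)])   # fake success label
    if attempt % 10 == 0:
        guess = model.predict(tactile_features)[0]             # predict before learning
        print(f"attempt {attempt}: predicted {guess}, actual {outcome[0]}")
    model.partial_fit(tactile_features, outcome, classes=classes)  # then update the weights

print("weights after 50 attempts:", np.round(model.coef_, 2))
```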
