RoboCop Patrols Microsoft’s Parking Lot

Living in New York, everyone is focused on crime. We even have a data bank of daily crime called COMPSTAT. However, recent waves of bad policing have made cops embrace new technologies to protect themselves from public scrutiny. The question is not if robots will replace meter maids, traffic cops and other tertiary policing, but when…

As an example, earlier this week Microsoft hired high-tech security guards to patrol its streets, protecting people in California’s Silicon Valley. The Knightscope K5 robots are fighting crime with lasers, GPS and heat-detecting technology. The R2-D2 look-alikes patrol autonomously within a set perimeter and record activity along their path.

Providing a “commanding but friendly presence,” the robots were hired to intimidate potential criminals. The new high-tech security guards have the ability to catch a criminal red-handed. “The robot is looking at the video, listening for glass breakage, any loud sound that breaking in would cause,” Knightscope co-founder Stacy Stephens says. “We’ll get the license plate, picture of the vehicle, geotag location, and time.”
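In software terms, the quote describes a simple detect-and-log pipeline: an acoustic trigger (glass breaking) followed by a capture of plate, photo, location and time. As a purely hypothetical illustration of the kind of record such a system might produce, here is a short sketch; the field names and sample values are invented, since Knightscope has not published its data format.

```python
# Hypothetical sketch of the incident record described in the quote
# (glass-break trigger -> license plate, photo, geotag, timestamp).
# Field names and sample values are invented for illustration only.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class IncidentReport:
    trigger: str                    # e.g. "glass_breakage" from the audio classifier
    license_plate: Optional[str]    # plate read by the camera, if one was visible
    photo_path: str                 # frame captured from the 360-degree camera
    latitude: float                 # geotag of where the robot was standing
    longitude: float
    timestamp: datetime             # when the event was detected

report = IncidentReport(
    trigger="glass_breakage",
    license_plate="7ABC123",        # made-up example plate
    photo_path="frames/incident_0001.jpg",
    latitude=37.3861,               # made-up coordinates
    longitude=-122.0839,
    timestamp=datetime.now(timezone.utc),
)
print(report)
```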

The robots stand five feet tall and weigh 300 pounds. They carry laser scanners, a thermal imaging system and the ability to read 300 car license plates per minute. A 360-degree HD surveillance camera streams live video to a command center. And yes, the robots can also detect odors.

But these real-life R2-D2s are not equipped with weapons, and can only summon human security officers to the scene after sounding an alarm when something goes wrong. The robots last an entire day on a single charge. When a robot notices its battery is low, it plugs itself into a charger for 20 minutes until it is fully charged. Silicon Valley’s computer giant Microsoft has already stationed four robot security guards on its campus. Knightscope says the robots are ideal for college campuses, malls, and other busy outdoor areas, and that approximately four dozen companies remain on a waiting list for the K5.

The K5 is not unique; it is part of a new wave of robotic recruits for the policing of the future. And you thought it was just a movie.

Robot Boldly Goes Where No Thing Has Gone Before…

The biggest news to hit the robotic sphere this week is Rosetta and Philae, the European robot comet voyagers. This dynamic duo, led by the European Space Agency (ESA), Europe’s equivalent of NASA, is a tag-team effort to better understand the Universe by attaching scientific instruments to a moving comet. Below are some of the first videos and images taken by the robots (note: Rosetta hovers above while Philae attaches to the comet’s icy surface below).

The first picture below is from the lander separating from its “mothership,” Rosetta. The mission’s success will depend on Philae’s ability to analyze the comet’s icy rock to better understand the formation of our Solar System. Most scientists believe that comets contain the oldest material known in our Solar System.

[First image: Philae separating from Rosetta]
Success! Philae landed, just missing the rocky spire above that would have destroyed the spacecraft:
In addition to the spiky rock formations, challenges to the mission include the very low gravity on the 4 km-wide ice mountain. Philae could simply bounce back into space, as the foot screws and harpoons meant to fasten it in position failed to deploy. Mission control said it might use the exploration drill as a backup.

During the mission, which started yesterday afternoon, Philae will take pictures of its surroundings like the one above and collect data on early water particles. Fingers crossed, as its outcome is highly uncertain. Rather than breaking down the components of these voyagers, we have posted below the schematics, which can be enlarged by clicking on the image. This is one small step for Philae, one giant leap for robotkind.

[Schematic: Rosetta landing]

[Schematic: Philae lander]

The Fantastic Micro-bot Voyage

My daughter is a physics geek. As a robot commentator, I can confirm the apple doesn’t fall far from the tree. At the end of the day, our ability to enter new spheres of space depends on the propulsion of our machines. I have always been fascinated by the 1966 science fiction movie Fantastic Voyage and the idea of navigating the smallest passages of the living body. In this remarkable age, Otto Klement’s and Jerome Bixby’s story now looks like a premonition…

Researchers at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany have engineered a “robotic scallop” that is only a fraction of a millimeter in size and capable of swimming in biomedically relevant fluids (such as blood, sweat and tears). Designing robots on the micro or nano scale (like, small enough to fit inside your body) is all about simplicity. There just isn’t room for complex motors or actuation systems. There’s barely room for any electronics whatsoever, not to mention batteries, which is why robots that can swim inside your bloodstream or zip around your eyeballs are often driven by magnetic fields. However, magnetic fields drag around anything and everything that happens to be magnetic, so in general, they’re best for controlling just a single microrobot at a time. Ideally, you’d want robots that can swim all by themselves, and a robotic micro-scallop, announced last week in Nature Communications, could be the answer.

When we’re thinking about robotic microswimmer motion, the place to start is with understanding how fluids (specifically, biological fluids) work at very small scales. Blood doesn’t behave like water does, in that blood is what’s called a non-Newtonian fluid. All that this means is that blood behaves differently (it changes viscosity, becoming thicker or thinner) depending on how much force you’re exerting on it. The classic example of a non-Newtonian fluid is oobleck, which you can make yourself by mixing one part water with two parts corn starch. Oobleck acts like a liquid until you exert a bunch of force on it (say, by rapidly trying to push your hand into it), at which point its viscosity increases to the point where it’s nearly solid.
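To see “viscosity depends on how hard you push” as arithmetic rather than prose, the simplest textbook description is a power-law fluid, whose apparent viscosity rises or falls with the shear rate. The sketch below uses made-up constants purely for illustration; it is not a calibrated model of blood or oobleck.

```python
# Power-law (Ostwald-de Waele) fluid: apparent viscosity = K * shear_rate**(n - 1).
# n < 1 gives a shear-thinning fluid (blood-like), n > 1 a shear-thickening one
# (oobleck-like), and n = 1 an ordinary Newtonian fluid.  K and n are illustrative.

def apparent_viscosity(shear_rate, K=1.0, n=0.5):
    """Viscosity the fluid presents when sheared at the given rate."""
    return K * shear_rate ** (n - 1)

for rate in (0.1, 1.0, 10.0):
    thinning = apparent_viscosity(rate, n=0.5)    # gets runnier the harder you push
    thickening = apparent_viscosity(rate, n=1.5)  # stiffens the harder you push
    print(f"shear rate {rate:5.1f}:  thinning {thinning:6.2f}   thickening {thickening:6.2f}")
```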

These non-Newtonian fluids represent most of the liquid stuff that you have going on in your body (blood, joint fluid, eyeball goo, etc.), which, while it sounds like it would be more complicated to swim through, is actually an opportunity for robots. Here’s why:

At very small scales, robotic actuators tend to be simplistic and reciprocal. That is, they move back and forth, as opposed to around and around, like you’d see with a traditional motor. In water (or another Newtonian fluid), it’s hard to make a simple swimming robot out of reciprocal motions, because the back and forth motion exerts the same amount of force in both directions, and the robot just moves forward a little, and backward a little, over and over. Biological microorganisms generally do not use reciprocal motions to get around in fluids for this exact reason, instead relying on nonreciprocal motions of flagella and cilia.

However, if we’re dealing with a non-Newtonian fluid, this rule (no joke, it’s actually a theorem, called the scallop theorem) doesn’t apply anymore, meaning that it should be possible to use reciprocal movements to get around. A team of researchers led by Prof. Peer Fischer at the Max Planck Institute for Intelligent Systems has figured out how, and appropriately enough, it’s a microscopic robot that’s based on the scallop.
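To make the scallop-theorem argument concrete, here is a deliberately crude one-dimensional toy, not the Fischer group’s actual model: a force-free swimmer made of two unequal shells, where each shell’s drag is set by the viscosity the fluid presents at that shell’s own speed. With a constant (Newtonian) viscosity, the slow opening stroke and the fast closing stroke cancel exactly; with a shear-thinning (Carreau-type) viscosity they no longer cancel and a net drift per cycle appears. Every parameter below is invented for illustration.

```python
# Toy 1-D "two-shell" swimmer.  Drag on shell i is c_i * mu(|v_i|) * v_i, and the
# swimmer is force-free, so c1*mu(|v1|)*v1 + c2*mu(|v2|)*v2 = 0 at every instant.
# This illustrates the scallop-theorem argument; it is NOT the model from the
# Nature Communications paper, and shapes, speeds and fluid parameters are made up.

def carreau_viscosity(shear_rate, mu0=10.0, mu_inf=1.0, lam=1.0, n=0.5):
    """Shear-thinning fluid: viscosity falls from mu0 toward mu_inf as shear rate grows."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1) / 2)

def newtonian_viscosity(shear_rate, mu=5.0):
    """Constant viscosity, for comparison."""
    return mu

def stroke_displacement(u, amplitude, c1, c2, viscosity):
    """Centre-of-mass shift during one half-stroke in which the shells'
    separation changes by `amplitude` at relative speed `u`."""
    r = 0.5                          # fraction of the relative speed taken by shell 1
    for _ in range(200):             # fixed-point iteration on the force balance
        g1 = c1 * viscosity(abs(r * u))
        g2 = c2 * viscosity(abs((1.0 - r) * u))
        r = g2 / (g1 + g2)
    return (2.0 * r - 1.0) / 2.0 * amplitude

def net_per_cycle(viscosity, u_slow=0.1, u_fast=10.0, amplitude=1.0, c1=1.0, c2=3.0):
    """Slow opening stroke (+amplitude) followed by a fast closing stroke (-amplitude)."""
    opening = stroke_displacement(u_slow, +amplitude, c1, c2, viscosity)
    closing = stroke_displacement(u_fast, -amplitude, c1, c2, viscosity)
    return opening + closing

print("Newtonian fluid:      net motion per cycle = %+.4f" % net_per_cycle(newtonian_viscosity))
print("Shear-thinning fluid: net motion per cycle = %+.4f" % net_per_cycle(carreau_viscosity))
```

Run it and the Newtonian case comes out to zero no matter how lopsided the stroke speeds are, while the shear-thinning case leaves a finite displacement every cycle; that leftover is what a reciprocal swimmer like the micro-scallop rides on.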

As shown in the above video, these robots are true swimmers. This particular version is powered by an external magnetic field, but the field is just providing energy input, not dragging the robot around directly the way it does for other microbots. And there are plenty of kinds of micro-scale reciprocal actuators that could be used instead, like piezoelectrics, bimetal strips, shape-memory alloys, or heat- or light-actuated polymers. There are lots of design optimizations that can be made as well, like making the micro-scallop more streamlined or “optimizing its surface morphology,” whatever that means.

The researchers say that the micro-scallop is more of a “general scheme” for micro-robots rather than a specific micro-robot that’s intended to do anything in particular. It’ll be interesting to see how this design evolves, hopefully to something that you can inject into yourself to fix everything that could ever be wrong with you. Ever.

I have written about nano-robotics previously, whether Google’s human project or new robotic contact lenses (see articles). However, the Max Planck Institute’s novel motion technique could be a game changer in the same way soft robotics is revolutionizing the mobile-robot space. Rev up those inner-space engines, as the next frontier could be you!

Robot on Aisle Four

Finding a light bulb in Home Depot is a very lonely experience: you are often left alone, wondering how to convert conventional wattage into LED equivalents. I say lonely because the orange apron is nowhere to be found. Lowe’s thinks it has found a way to be competitive in the retail landscape in time for the holiday season.

Meet OSHbot, the newest employee of Lowe’s Orchard Supply Hardware store in San Jose, California, where the first robots will be stationed. Customers will be able to communicate their needs verbally or by selecting items from a touch-screen menu displayed on OSHbot. Customers can also show OSHbot what they are looking for by holding an item, such as a screw, in front of the robot’s 3D scanner. OSHbot will then immediately report whether the item is in stock, and will lead customers through the store to locate it.
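In software terms, the interaction described above reduces to a query (spoken, typed, or scanned) matched against the store’s inventory, followed by a stock check and a destination to walk the customer to. The sketch below is purely hypothetical: the items, field names, and matching logic are invented, and neither Lowe’s nor Fellow Robots has published how OSHbot actually works.

```python
# Hypothetical sketch of an OSHbot-style lookup: match a query against inventory,
# report stock, and hand back an aisle to lead the customer to.  All data invented.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    aisle: int
    in_stock: int

INVENTORY = [
    Item("wood screw #8 x 1-1/4 in", aisle=12, in_stock=240),
    Item("LED bulb, 60W equivalent", aisle=4, in_stock=35),
    Item("claw hammer 16 oz", aisle=7, in_stock=0),
]

def lookup(query: str) -> str:
    """Return a customer-facing answer for the best-matching inventory item."""
    matches = [item for item in INVENTORY if query.lower() in item.name.lower()]
    if not matches:
        return "I couldn't find that, let me call a sales associate."
    item = matches[0]
    if item.in_stock == 0:
        return f"Sorry, {item.name} is out of stock today."
    return f"{item.name}: {item.in_stock} in stock. Follow me to aisle {item.aisle}."

print(lookup("LED bulb"))   # in stock, robot offers to guide
print(lookup("hammer"))     # out of stock, robot says so
```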

“Retail really hasn’t changed much in the last couple hundred years,” Kyle Nel, executive director of Lowe’s Innovation Lab, says in the video. “The robots are the first thing that can really change the customer experience.”

The robot uses the same sensors as Google’s driverless cars to avoid collisions and every night it updates its map of the store’s inventory.

[Image: OSHbot]

Nel told Ad Age that the robots aren’t meant to replace retail workers. “What our sales associates are amazing at doing and what they love spending time on are consulting and helping customers with their projects and solving their problems,” Nel said. “We can let the robots answer questions like, ‘Where are the hammers?’”

Lowe’s worked with Singularity University and a startup called Fellow Robots to make the customer-service machines a reality. However, Lowe’s hasn’t said whether it will expand the robot program to other stores, so I will have to wait until it arrives in New York. As we enter a more robotic age, sales associates now have to compete not only on sales per square foot but also on product knowledge against an ever-growing database. I wonder when they will start wearing orange aprons.

Curiosity Saved The Robot

Due to several cyber attacks, RobotRabbi was down intermittently for a few weeks. We are now back, more stable and safer than ever. We are excited to share with you a new design for RobotRabbi. You will notice many new features for sharing posts and signing up for new ones. As we begin anew, it makes me think about robots in their own infancy.

At a recent TEDx Talk in Cannes, France, Pierre-Yves Oudeyer presented a novel concept: using robots to better understand the mechanisms by which newborns learn (see above). As any parent knows, a child’s brain and hands are used to fabricate and model the things a baby wants to better understand. Play is critical to development during a time when the brain has the greatest plasticity. Oudeyer explains that scientists also use fabrication to build new knowledge of the world around us. Scientists build large-scale aquariums to understand ocean behavior and construct large computer simulations to understand spiral galaxies.

Fabricating baby robots and providing them with the tools to learn is the central idea of the talk. Robots are given the tools to create their own experiments and exhibit forms of cognitive development. Curiosity is a large focus of robot learning. Pierre-Yves explains that children use curiosity as a learning mechanism, but they do it in a very structured way. His team built robots that could learn, discover, and set their own goals.

The first experiment shown has two robots with quadruped bodies in a playground environment. Actions are performed, effects are logged into an internal database, and the robot tries to detect regularities in the effects. This gives the robots the ability to make predictions about future states. The robot brains choose experiments that they think will provide the most progress in their internal predictive algorithms. This allows the bots to gain new skills but also brings in a self-organization that occurs between each robot and its environment. Eventually each robot creates a system of vocal interaction with the other robots. In the video it sounds like a cross between puppy and kitten mewling, and the sounds are transmitted between bots until they’re all making the same sounds.
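The loop described above (act, log the effect, measure how much your predictions are improving, and spend time where the improvement is fastest) can be sketched in a few lines of code. The toy below is only in the spirit of the learning-progress mechanism Oudeyer describes; the activities, their error curves, and every parameter are invented.

```python
# Toy sketch of curiosity as "learning progress": the robot tracks its prediction
# error for each activity and keeps practising whichever activity's error has
# recently been dropping fastest.  Only in the spirit of Oudeyer's mechanism;
# the activities, error curves and all numbers below are invented.
import random

# current prediction error and how much one practice trial reduces it
ACTIVITIES = {
    "kick legs":         {"error": 0.10, "learn_rate": 0.0},   # already mastered
    "bat hanging toy":   {"error": 1.00, "learn_rate": 0.05},  # learnable with practice
    "predict TV static": {"error": 1.00, "learn_rate": 0.0},   # pure noise, unlearnable
}
history = {name: [] for name in ACTIVITIES}

def practice(name):
    """One trial: the error shrinks by the activity's learn rate and is observed with noise."""
    act = ACTIVITIES[name]
    act["error"] = max(0.05, act["error"] * (1.0 - act["learn_rate"]))
    history[name].append(act["error"] + random.uniform(-0.005, 0.005))

def learning_progress(name, window=5):
    """Recent drop in prediction error, used as the intrinsic reward for curiosity."""
    h = history[name]
    if len(h) < 2 * window:
        return float("inf")          # force a few trials of everything at the start
    return sum(h[-2 * window:-window]) / window - sum(h[-window:]) / window

counts = {name: 0 for name in ACTIVITIES}
for _ in range(150):
    choice = max(ACTIVITIES, key=learning_progress)   # greedy on learning progress
    practice(choice)
    counts[choice] += 1

print(counts)  # the learnable-but-unmastered activity ends up with most of the practice
```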

Oudeyer emphasizes the link between the robot’s learning, its curiosity, its body, and its environment. He says that changing the body but keeping the same learning mechanism will create different cognitive learning stages, in a different order. The entire talk is fascinating and covers not just learning but also communication and languages. At the end of the talk Oudeyer introduces Poppy, the open-source, 3D-printed humanoid robot (similar to Intel’s Jimmy, see my previous post). He says that this robot will allow every lab, school or fabrication space to join the scientific exploration of robotic learning. Poppy is worth a few articles in and of itself, just on the immense scale of the project and the effort and skill required to build and program one of the bots.

As we look to robot learning for paradigms of our own human development, the converse is also true. Robots will eventually outgrow the lab and become self-learning modules specific to their tasks, gaining expertise with experience.
