
Motion planning with OpenRave: an IPython notebook

Base pose planning with OpenRave

Disclaimer:

This is my first attempt at a tutorial using IPython notebooks. It is made more complicated by the fact that I am using OpenRave, which launches an external process with the viewer. To create this post I used nbconvert and then imported the resulting HTML into WordPress. The result is not very “bloggy”, but it’s kind of cool. I am open to suggestions for how to embed an IPython notebook into WordPress. The same notebook can be viewed via nbviewer.

A useful tool in robotics is finding base poses from which an object can be reached. This is not an easy problem, and it is the subject of ongoing research by groups around the world. Below is my simple take on it: if you know something about the world’s geometry (tables, objects, planes), then you can look for feasible solutions. The search can be done in several different ways, but the approach I’m using here is as simple as it is effective: sample. This boils down to generating random solutions until one looks good. It seems crude, but it does the job and it can be used as the starting point for more complex methods.
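To make the idea concrete, below is a minimal sketch of that rejection-sampling loop written against the OpenRave Python API. It is not the code from my repository: the helper name and the sampling ranges are made up, and the reachability check is left as a callback (in practice it would be an IK query for a grasp, which assumes an IK model has been loaded for the active manipulator).

import numpy as np
import openravepy

def sample_base_pose(robot, check_reachable, num_attempts=300):
    """Rejection sampling: try random base poses until one is collision
    free and the target is reachable (illustrative helper, made-up API)."""
    env = robot.GetEnv()
    for _ in range(num_attempts):
        # Random yaw plus a random offset around the current base position.
        candidate = openravepy.matrixFromAxisAngle(
            [0, 0, np.random.uniform(-np.pi, np.pi)])
        candidate[:2, 3] = (robot.GetTransform()[:2, 3]
                            + np.random.uniform(-1.5, 1.5, 2))
        with robot:  # state saver: the robot pose is restored on exit
            robot.SetTransform(candidate)
            if env.CheckCollision(robot):
                continue
            sol = check_reachable(robot)  # e.g. an IK solution for the grasp
            if sol is not None:
                return candidate, sol
    raise ValueError("no valid base pose found")

The helpers used in the rest of this post follow the same pattern, with the reachability check being either a grasp for the object or a random point on a surface.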

For those of you more into robotics, you will see an obvious parallel with Rapidly-exploring Random Trees (RRTs), or in general stochastic motion planning.

This code assumes you have downloaded the helper routines from my Github repository. You don’t need the whole lot (which is not much at the time of writing, but hopefully it will grow), just the files generate_reaching_poses, navigation and utils.

Below is a video that shows the execution of exactly the same code as in this notebook, so you can see the effects without trying.

In [4]:
from IPython.display import YouTubeVideo
YouTubeVideo('o-sQ4nlPmVU', width=853, height=480)
Out [4]:

The first thing is to load the environment and the viewer. I am using a standard room provided by the OpenRave distribution.

In [1]:
import openravepy; 
import numpy as np
env = openravepy.Environment()
env.SetViewer('qtcoin')
env.Load('data/pr2test2.env.xml');

Then load my code and select the object to grasp and the surface on which to place it.

In [5]:
%cd /home/pezzotto/Projects/OpenRaving #change this to the folder where you have placed your code.
import generate_reaching_poses
import utils
robot = env.GetRobots()[0]; manip = robot.SetActiveManipulator('rightarm')
obj = env.GetKinBody('TibitsBox1')
table = env.GetKinBody('Table1')
/home/pezzotto/Projects/OpenRaving

It’s always a good idea to fold the arms before moving.

In [6]:
utils.pr2_tuck_arm(robot)

Now comes the first important part. The first step is to generate grasping poses for the gripper (this can be a long process). Then we find a base position from which the gripper pose admits an IK solution and which is collision free. Finally we plan a base trajectory to that position and execute it.

In [7]:
pose, sol, torso = generate_reaching_poses.get_collision_free_grasping_pose(robot, obj, 300, use_general_grasps=False)
import navigation; planner = navigation.SimpleNavigationPlanning(robot)
planner.performNavigationPlanning(pose, execute=True)
robot.GetController().Reset()

Once the robot has reached the base pose, lift the torso to the height found above. We are going to use motion planning to move the gripper into position, just to show some nice arm movements.

In [8]:
robot.SetTransform(pose)
robot.SetDOFValues([torso], [robot.GetJointIndex('torso_lift_joint')],)

Here motion planning kicks in. Give it some time.

In [9]:
mplanner = openravepy.interfaces.BaseManipulation(robot)
robot.SetActiveDOFs(manip.GetArmIndices()); 
mplanner.MoveActiveJoints(sol)
robot.WaitForController(0);

Ok, time to grab the object. We don’t deal with real grasping here (the objects are just bounding boxes anyway).

In [10]:
robot.GetController().Reset()
robot.Grab(obj)
utils.pr2_tuck_arm(robot)

We are almost done. The final step is to find a free spot on the table, move there and place the object down. Finding a free spot on a table is very similar to finding a grasping position, but this time, instead of checking grasps, we check the reachability of a random point on the surface (a sketch of this search follows the code below). After having found the pose we just teleport the robot; we have already checked that motion planning works.

In [11]:
pose, sol, torso = generate_reaching_poses.get_collision_free_surface_pose(robot, table, 100)
robot.SetTransform(pose)
robot.SetDOFValues(sol, robot.GetActiveManipulator().GetArmIndices())
robot.SetDOFValues([torso], [robot.GetJointIndex('torso_lift_joint')],)
robot.Release(obj)
utils.pr2_tuck_arm(robot)
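For reference, the search inside get_collision_free_surface_pose can be pictured along these lines: sample a point on the table top using its bounding box, build a gripper pose above it, and ask for a collision-free IK solution. The sketch below is only illustrative; the downward-pointing orientation and the hovering height are guesses, not what the repository actually uses.

import numpy as np
import openravepy

def sample_point_on_surface(table):
    """Draw a random point on the top face of the table's axis-aligned
    bounding box (illustrative only)."""
    aabb = table.ComputeAABB()
    offset = np.array([np.random.uniform(-aabb.extents()[0], aabb.extents()[0]),
                       np.random.uniform(-aabb.extents()[1], aabb.extents()[1]),
                       aabb.extents()[2]])  # top of the box
    return aabb.pos() + offset

def gripper_pose_above(point, height=0.2):
    """A gripper target hovering above the sampled point, pointing down."""
    target = openravepy.matrixFromAxisAngle([0, np.pi, 0])
    target[:3, 3] = point + np.array([0.0, 0.0, height])
    return target

# The reachability test is then a plain IK query, for example:
# sol = manip.FindIKSolution(gripper_pose_above(sample_point_on_surface(table)),
#                            openravepy.IkFilterOptions.CheckEnvCollisions)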

That’s it! Quite crude, but effective!

 

Posted on November 2, 2012 in Discussion

 

A New Challenge for Robotics

It seems like yesterday that I had just started my PhD and was watching the first DARPA Grand Challenge (2005) with awe: cars racing in the desert with no driver, the world being changed before my eyes. It wasn’t about developing particularly new technologies, but about showing that research could move out of the labs and into the field. It made history. It prompted me to focus more on real robotics. And now it is happening again.

In the past couple of years Boston Dynamics has shown the world that robots don’t necessarily need wheels: they can walk over rough terrain on four legs, or even two. But it is not only about making them stand. These robots will have to do things with common tools, like driving a truck, closing a valve or using a drill. They won’t have the great stability that four wheels provide, or the capacity to carry a heavy payload packed with sensors and computational power. The lack of precision in motion will have to be compensated for with sensing, and with a novel inclusion of the human operator in the loop.

Meet Atlas, the new guy that is going to change the way robots will work alongside humans.

[Image: Atlas]

It won’t be alone, as seven teams are building their own hardware to compete for the first prize.

So what is this challenge about? I have been lucky enough to get a virtual seat at the kickoff meeting. The details are not final, but the main idea is that teams from all around the world will compete to create a robot that can be deployed in a disaster-stricken area, in a scenario possibly inspired by Fukushima, to perform tasks too dangerous for humans. It is not meant to be a camera-on-wheels system, but a robot that can perform actions in a semi-supervised way. If this works, the technology will change the way manufacturing is done (like Baxter, but in a different way), and it will give a huge new boost to robotics and its deployment in the real world.

There has been a lot of talk about the challenge that I am not going to repeat here. Below are what I think will be the main obstacles to solving it:

  • Perception: Identifying items that are either usable by the robot (tools, valves, trucks) or that are an obstacle towards a goal (rubble blocking a door).
  • Locomotion: Moving on an uneven terrain. Entering or exiting a vehicle. Climbing a ladder.
  • Communication: Performing actions with little or no supervision from the operator, given the communications constraints a disaster environment imposes.
  • Robustness: It’s not about having a perfect algorithm to solve a problem, but about being able to adapt and cope with environments and situations that could not possibly have been foreseen and accounted for when programming the robot.
  • Integration: A lot of components and ideas will merge and fight to control the robot, and they will call for the right arbitration for the overall system to be functional.

Many more obstacles will need to be overcome. People will work day and night to solve waves of problems. There will be last-minute rushes and hacky solutions. The end result might look like the one the folks at Drexel University have nicely illustrated in the following video.

Good luck to all the teams, PIs, scientists and engineers competing to make the world a better place!

 

Posted on October 28, 2012 in Discussion, News

 


5/4 time, Jazz, David Brubeck and… Radiohead!

Disclaimer: I know I called this blog “Fantastic Machines”, and I promised I would talk about robotics, science, programming or related topics, but this is too mind-blowing to ignore.

Just by chance I stumbled upon this post by John Cook. Music and mathematics share a lot, and he does a good job of explaining the connections.

Here are two artists I love: Radiohead and Dave Brubeck. They play different music (although you could argue that Radiohead are inspired by jazz; see for example Pyramid Song). The latest album by Radiohead, which I personally don’t like, sounds more like dance music than jazz. However, somebody spotted some similarities between Take Five (Brubeck) and 15 Step (Radiohead). If you put them together you get the result below.

Crazy, isn’t it? By the way, you should listen to Brubeck’s Take Five on its own, and to Blue Rondo a la Turk, both on the great album Time Out.

You might be wondering what this has to do with robotics… Well, jazz and robotics fit together very well, as shown in the following video :D (the soundtrack is Deckchair, by Acoustic Ladyland).

 

Posted on October 19, 2012 in Discussion

 


What Baxter Means for Research in Robotics

Short story: awesome! You can keep reading now if you want to know why I think so.

Today I was listening to an interview with Rodney Brooks speaking about Baxter. When I saw it featured on IEEE Spectrum I thought: “Cool, let’s see where it goes”. But listening to Brooks describe his creature gives you a different perspective.

Take a decades-old task, like automatic assembly. Take a new technology, like learning from demonstration. Then show the world that research can go out of the labs and change people’s lives. Isn’t that easy?

No, it isn’t. I haven’t seen Baxter in action, but I bet there are a lot of hacks and assumptions that make it do a proper job. But that’s reasonable, even welcome. Most of the papers you’ll read in robotics start with a sentence along the lines of:

We need robots capable of learning from a non-expert to be usable in the real world.

And then come the equations, data collection, proofs and lab tests. However, Rodney Brooks does something he has done in the past, in fact he has built his career around it: he does for real what others only discuss in papers and labs.

Don’t get me wrong, I’m not one more voice saying that research in universities should be more application-focused and less theoretical. Baxter is built upon the research that people in universities around the world have done over the past years. Robotics, manipulation, computer vision: they all share the prize here.

This is praise for all my colleagues who have worked hard and who never believed their research would make a difference. It takes a collective effort to change the world.

And a single mind who figures out how to make money out of it.

 

Posted on October 16, 2012 in Discussion, Results

 


A Short Account of Wrapping C++ in Python

For the AI-challenge competition I am participating in, I found that path planning (using A*) was (obviously!) the slowest part of my Python code. It turned out that 80% of the computation was spent planning!
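If you want to see the same breakdown for your own bot, the standard library profiler is enough. The snippet below is a minimal sketch; bot.do_turn and game_state are hypothetical names standing in for whatever your per-turn entry point is.

import cProfile
import pstats

# Profile one call of the bot's turn function (the string is executed in the
# __main__ namespace) and print the 15 most expensive calls by cumulative time.
cProfile.run('bot.do_turn(game_state)', 'turn.prof')
pstats.Stats('turn.prof').sort_stats('cumulative').print_stats(15)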

So I decided to find some good C++ code, wrap it in Python and give it a go. The code I decided to use is MicroPather, which is incredibly easy to use and seems to be pretty fast. One only has to cope with passing void* around, and everything else is easy.

I had heard a lot of good things about Cython, so I decided to give it a try. Alas, although the new support for classes seems promising, wrapping an already existing class and playing with polymorphism proved very hard.

To cut a long story short, I reverted to good old Boost.Python. It took me no time at all! Another proof that sticking to old approaches is still the best solution.
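For the curious, the Python side of such a wrapper boils down to a small build script. Here is a minimal distutils sketch; the file and module names are invented, and it assumes the Boost.Python development libraries are installed (the actual BOOST_PYTHON_MODULE wrapper lives in the C++ source):

# setup.py: build a Boost.Python extension around the C++ pathfinder.
# File and module names below are hypothetical.
from distutils.core import setup
from distutils.extension import Extension

pather = Extension(
    'micropather',                       # name of the resulting Python module
    sources=['micropather_wrap.cpp',     # the BOOST_PYTHON_MODULE wrapper
             'micropather.cpp'],         # the original C++ library
    libraries=['boost_python'])          # link against Boost.Python

setup(name='micropather', version='0.1', ext_modules=[pather])

Building with python setup.py build_ext --inplace then gives a module that can be imported like any other.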

BTW, it’s incredible how an old and seldom-updated library like boost::python still holds up so well against today’s standards and ever-changing APIs!

Next plan: include Python hooks in the C++ A* code, for maximum versatility!

 

Posted on November 9, 2011 in Programming

 


The Complex Dance of a Prey

During my research on Emergent Behaviours I have often been questioned about the usefulness of something that you can’t control, can’t predict, and that seems to be totally random. I am not going to go through these points here; I’ll let the future talk about the past.

However, scientifically speaking, I need to prove a few points. In particular I was looking for answers to two questions:

  • Can performance improve when allowing emergent behaviours?
  • Is an emergent behaviour an expression of randomness?

To answer these questions I set up a Predator-Prey scenario, where the predator had all the advantages, namely:

  • It is faster than the prey
  • It is programmed by me

Armed with the Python library PyEvolve, I developed several prey without success. This went on until I decided to blend complexity and goal-pursuing. In other words, I trained two recurrent neural networks, one to be complex and the other to avoid the predator, with a third network mixing the two (a rough sketch of this architecture is given below). The resulting prey speaks for itself:

This proved that complexity, the fertile ground for emergent behaviours, can improve the performance of a goal-oriented system.
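The post stays at this level of detail, but to give a concrete picture of the mixing idea, here is a rough sketch of the architecture: two recurrent networks each propose a motion command and a third, smaller network decides how to blend them. Everything here (network sizes, random weights, the blending scheme) is invented for illustration; in the original work the weights were evolved with PyEvolve.

import numpy as np

class SimpleRNN(object):
    """Minimal Elman-style recurrent network (illustrative only)."""
    def __init__(self, n_in, n_hidden, n_out, rng):
        self.W_in = 0.5 * rng.randn(n_hidden, n_in)
        self.W_rec = 0.5 * rng.randn(n_hidden, n_hidden)
        self.W_out = 0.5 * rng.randn(n_out, n_hidden)
        self.h = np.zeros(n_hidden)

    def step(self, x):
        self.h = np.tanh(self.W_in.dot(x) + self.W_rec.dot(self.h))
        return np.tanh(self.W_out.dot(self.h))

rng = np.random.RandomState(0)
complex_net = SimpleRNN(4, 8, 2, rng)   # evolved to "be complex"
avoid_net = SimpleRNN(4, 8, 2, rng)     # evolved to avoid the predator
mixer = SimpleRNN(4, 4, 2, rng)         # blends the other two

def prey_command(sensors):
    """Blend the two behaviours; the mixer outputs per-command weights in [0, 1]."""
    w = 0.5 * (mixer.step(sensors) + 1.0)   # map tanh output to [0, 1]
    return w * complex_net.step(sensors) + (1.0 - w) * avoid_net.step(sensors)

# Example: a 4-dimensional sensor reading (e.g. relative predator position and velocity).
print(prey_command(np.array([0.1, -0.3, 0.05, 0.2])))

In this picture, the second experiment below amounts to replacing complex_net with a random number generator.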

To answer the second question, I replaced the complex module with a random number generator. The result was awful: the prey could not survive for more than 40 seconds!

Complexity (according to Kolmogorov) is not randomness. Chaotic processes have laws, which a successful prey managed to exploit for survival. This hopefully gives me some breathing room in the struggle to prove that my research on emergent behaviours will lead to interesting results and ideas.

This work is currently under review for journal publication… let’s hope for the best!

 

Posted on November 30, 2010 in Chaos, Discussion

 


Robotic tales from the real world

During my PhD I worked on two “real world” projects, namely an indoor and an outdoor robotic museum tour guide. This is a short account of that experience, rather than a description of what I did (which the interested reader can find here).

Cicerobot

The first project looked a lot like CMU’s Minerva: a robot guides visitors around several exhibits, interacting with them and proposing a few tours. The main problem I had was dealing with people. It was a hard task to convince the museum employees that “no, this robot is not going to take your jobs, please do not destroy it”. The second problem was the invisible obstacles. The museum was full of glass panels, glass screens and several other objects that don’t reflect light very willingly. And sonar was out of the question, as it is too noisy. Ah, yes, I forgot to mention the two staircases! The third problem was the working conditions: a laptop on my lap, on a plastic chair. No internet. And air conditioning that had gone on holiday during the Sicilian summer.

But the big moment came, as students from schools and the state TV arrived to record a session with the robot. And everything worked perfectly! The satisfaction of seeing the robot cheerfully negotiating (in)visible obstacles, kids and invaluable museum items is indescribable. And all with a single processor kept at 99% usage, where a bit less would have meant localisation failing and the robot tipping down the stairs. Countless hours of sweating and bug-tracking spent with a wonderful team had finally been repaid.

Below is the video that recorded the event. Note the last frames, showing me sitting on the chair looking very worried.

Robotanic

Take Cicerobot, do a pit stop to change the wheels and a few sensors, apply some new make-up, and you’ve got Robotanic. With one difference: its museum was an outdoor Botanical Garden. And that’s a huge difference!

The environment was an area 100 meters long by 30 meters wide. The alleys were covered with sand and foliage, making the odometry a pure random number generator. The GPS kept reporting a position that jumped from the navigable alleys to the far less friendly trees and bushes nearby. And the working conditions made the museum above look like heaven: sitting on stone benches, constantly under attack from mosquitoes, and keeping an eye on the sky for fear it would get too hot or start raining!

Again the big moment came; this time no TV, but several people attending a conference flocked to see the robot. At the very last minute my supervisor noticed an unplugged cable hanging from the camera and said “why don’t you plug it in?” I did, and I regret it. The firewire cable triggered an interrupt conflict that took the GPS out of action. It took me more than half an hour to find the problem, and by then I had lost the momentum. The rest of the demo went well, but the bad start cast a cloud over the whole event.

Below is the only video I could find of the robot in action. It is a bit wobbly, but I am proud of it.

Lessons learnt

First of all: Test^30 (that is, test to the power of 30). Whenever you are doing something in the real world, test it as much as possible. Something will still go wrong, but at least you are minimising the risk. Second: code freeze. If something is working, and you are sure it is working, put it in the fridge and leave it there until the very last moment (or until it expires!). Third, and most important: have fun. It is very frustrating to fall from the ideal world of pure research into the real world of people expecting something from you. But the satisfaction of seeing something really working is far more stimulating than having a paper published!


 

Posted on August 2, 2010 in Research, Results

 
