
Category Archives: Discussion

Motion planning with OpenRave: an IPython notebook

Base pose planning with OpenRave

Disclaimer:

This is my first attempt at a tutorial using IPython notebooks. It is made more complicated by the fact that I am using OpenRave, which launches an external process with the viewer. To create this post I used nbconvert and then imported the resulting HTML into WordPress. The result is not very “bloggy”, but it’s kind of cool. I am open to suggestions for how to embed an IPython notebook into WordPress. The same notebook can be viewed via nbviewer.

A useful tool in robotics is finding base poses from which an object can be reached. This is not an easy problem, and it is the subject of ongoing research by groups around the world. Below is my simple take on it: if you know something about the world’s geometry (tables, objects, planes), then you can look for feasible solutions. Searching can be done in several different ways, but the approach I’m using here is as simple as it is effective: sampling. This boils down to generating random solutions until one looks good. It seems crude, but it does the job and it can be used as the starting point for more complex methods.
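
To make the idea concrete, here is a minimal, self-contained sketch of such a sampling loop. This is illustrative only: the function names are made up for this post, and the real logic lives in the generate_reaching_poses module mentioned below.

import numpy as np

def sample_base_pose(obj_xy, r_min=0.5, r_max=1.0):
    """Draw a random base position in an annulus around the object, facing it."""
    r = np.random.uniform(r_min, r_max)
    angle = np.random.uniform(0, 2 * np.pi)
    x = obj_xy[0] + r * np.cos(angle)
    y = obj_xy[1] + r * np.sin(angle)
    yaw = np.arctan2(obj_xy[1] - y, obj_xy[0] - x)  # orient the base towards the object
    return (x, y, yaw)

def find_feasible_pose(obj_xy, is_feasible, max_attempts=300):
    """Rejection sampling: keep drawing random poses until one is accepted.
    is_feasible is a caller-supplied test, e.g. "collision-free and the
    grasp admits an IK solution"."""
    for _ in range(max_attempts):
        pose = sample_base_pose(obj_xy)
        if is_feasible(pose):
            return pose  # first feasible sample wins
    return None  # no feasible pose within the sampling budget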

For those of you more into robotics, you will see an obvious parallel with Rapidly-exploring Random Trees (RRTs), or stochastic motion planning in general.

This code assumes you have downloaded the helper routines from my Github repository. You don’t need the whole lot (which is not much at the time of writing, but hopefully it will grow larger), just the files generate_reaching_poses, navigation and utils.

Below is a video that shows the execution of exactly the same code as in this notebook, so you can see the effects without trying it yourself.

In [4]:
from IPython.display import YouTubeVideo
YouTubeVideo('o-sQ4nlPmVU', width=853, height=480)
Out [4]:

The first thing is to load the environment and the viewer. I am using a standard room provided by the OpenRave distribution.

In [1]:
import openravepy
import numpy as np
env = openravepy.Environment()
env.SetViewer('qtcoin')
env.Load('data/pr2test2.env.xml');

Then load my code and select the object to grasp and the surface on which to place it.

In [5]:
%cd /home/pezzotto/Projects/OpenRaving #change this to the folder where you have placed your code.
import generate_reaching_poses
import utils
robot = env.GetRobots()[0]
manip = robot.SetActiveManipulator('rightarm')
obj = env.GetKinBody('TibitsBox1')
table = env.GetKinBody('Table1')
/home/pezzotto/Projects/OpenRaving

It’s always a good idea to fold the arms before moving.

In [6]:
utils.pr2_tuck_arm(robot)

Now comes the first important part. The first step is to generate grasping poses for the gripper (this can be a long process). The next is to find a base position from which the gripper pose admits an IK solution, and which is collision-free. Then plan a base trajectory to that position and execute it.

In [7]:
pose, sol, torso = generate_reaching_poses.get_collision_free_grasping_pose(robot, obj, 300, use_general_grasps=False)
import navigation
planner = navigation.SimpleNavigationPlanning(robot)
planner.performNavigationPlanning(pose, execute=True)
robot.GetController().Reset()

Once the robot has reached the base pose, lift the torso to the height found above. We are going to use motion planning to move the gripper into position, just to show some nice arm movements.

In [8]:
robot.SetTransform(pose)
robot.SetDOFValues([torso], [robot.GetJointIndex('torso_lift_joint')])

Here motion planning kicks in. Give it some time.

In [9]:
mplanner = openravepy.interfaces.BaseManipulation(robot)
robot.SetActiveDOFs(manip.GetArmIndices())
mplanner.MoveActiveJoints(sol)
robot.WaitForController(0);

OK, time to grab the object. We don’t deal with real grasping here (the objects are just bounding boxes anyway).

In [10]:
robot.GetController().Reset()
robot.Grab(obj)
utils.pr2_tuck_arm(robot)

We are almost done. The final step is to find a free spot on the table, move there and place the object down. Finding a free spot on a table is very similar to finding a grasping position, but this time, instead of checking grasps, we check the reachability of a random point on the surface. After having found the pose we just teleport the robot, since we have already checked that motion planning works.

In [11]:
pose, sol, torso = generate_reaching_poses.get_collision_free_surface_pose(robot, table, 100)
robot.SetTransform(pose)
robot.SetDOFValues(sol, robot.GetActiveManipulator().GetArmIndices())
robot.SetDOFValues([torso], [robot.GetJointIndex('torso_lift_joint')])
robot.Release(obj)
utils.pr2_tuck_arm(robot)

That’s it! Quite crude, but effective!


Posted by on November 2, 2012 in Discussion

 

A New Challenge for Robotics

It seems like only yesterday that I had just started my PhD and was watching the first DARPA Grand Challenge (2005) with awe: cars racing in the desert with no driver, the world being changed before my eyes. It wasn’t about the development of particularly new technologies, but about showing that research could move out of the labs and into the field. It made history. It prompted me to focus more on real robotics. And now it is happening again.

In the past couple of years Boston Dynamics has shown the world that robots don’t necessarily need wheels: they can walk over rough, impassable terrain using four legs, or even two. But it is not only about making them stand. These robots will have to do stuff using common tools, like driving a truck, closing a valve or using a drill. They won’t have the great stability four wheels provide, or the capability of carrying a heavy payload packed with sensors and computational power. The lack of precision in motion will have to be compensated for with sensing, and with a novel inclusion of a human operator in the loop.

Meet Atlas, the new guy that is going to change the way robots will work alongside humans.


It won’t be alone, as seven teams are building their own hardware to compete for the first prize.

So what is this challenge about? I have been lucky enough to get a virtual seat at the kick-off meeting. The details are not final, but the main idea is that teams from all around the world will compete to create a robot that can be deployed in a disaster-stricken area, in scenarios possibly inspired by Fukushima, to perform tasks too dangerous for humans. It is not about a camera-on-wheels system, but a robot that can perform actions in a semi-supervised way. If this works, the technology will change the way manufacturing is done (like Baxter, but in other ways), and it will give a huge new boost to robotics and its deployment in the real world.

There has been a lot of talk about the challenge that I am not going to repeat here. Here are what I see as the main obstacles to solving it:

  • Perception: Identifying items that are either usable by the robot (tools, valves, trucks) or that are an obstacle towards a goal (rubble blocking a door).
  • Locomotion: Moving over uneven terrain. Entering or exiting a vehicle. Climbing a ladder.
  • Communication: Performing actions with little or no supervision from the operator, given the communications constraints a disaster environment imposes.
  • Robustness: It’s not about having a perfect algorithm to solve a problem, but about being able to adapt and cope with environments and situations that could in no way have been foreseen and accounted for when programming the robot.
  • Integration: A lot of components and ideas will merge and fight to control the robot, and they will call for the right arbitration for the overall system to be functional.

Many more obstacles will need to be overcome. People will work day and night to solve wave after wave of problems. There will be last-minute rushes and hacky solutions. The end result might look like the one the folks at Drexel University have nicely illustrated in the following video.

Good luck to all the teams, PIs, scientists and engineers competing to make the world a better place!

 

Posted by on October 28, 2012 in Discussion, News

 


5/4 time, Jazz, Dave Brubeck and… Radiohead!

Disclaimer: I know I called this blog “Fantastic Machines”, and I promised I would talk about robotics, science, programming or related topics, but this is too mind-blowing to ignore.

Just by chance I stumbled upon this post by John Cook. Music and mathematics share a lot, and he does a good job of explaining the connections.

Here are two artists I love: Radiohead and Dave Brubeck. They play different music (although you could argue that Radiohead are inspired by jazz; see for example Pyramid Song). The latest album by Radiohead, which I personally don’t like, sounds more like dance music than jazz. However, somebody spotted some similarities between Take Five (Brubeck) and 15 Step (Radiohead). If you put them together you get the result below.

Crazy, isn’t it? By the way, you should listen to Brubeck’s Take Five alone, and Blue Rondo à la Turk, both on the great album Time Out.

You might be wondering what this has to do with robotics… Well, jazz and robotics go very well together, as shown in the following video 😀 (the soundtrack is Deckchair, by Acoustic Ladyland).

 

Posted by on October 19, 2012 in Discussion

 


What Baxter Means for Research in Robotics

Short story: awesome! You can keep reading now if you want to know why I think so.

Today I was listening to an interview with Rodney Brooks speaking about Baxter. When I saw it featured on IEEE Spectrum I thought: “Cool, let’s see where it goes”. But listening to Brooks describe his creature gives you a different perspective.

Take a decades-old task, like automatic assembly. Take a new technology, like learning from demonstration. Then show the world that research can go out of the labs and change people’s lives. Isn’t that easy?

No, it isn’t. I haven’t seen Baxter in action, but I bet there are a lot of hacks and assumptions that make it do a proper job. But that’s reasonable, even welcome. Most of the papers you’ll read in robotics start with a sentence along the lines of:

We need robots capable of learning from a non-expert to be usable in the real world.

And then the paper fires up equations, data collection, proofs and lab tests. However, Rodney Brooks does something he has done before (indeed, he has built his career around it): he does for real what others only discuss in papers and labs.

Don’t get me wrong, I’m not one more voice saying that research in universities should be more application-focused and less theoretical. Baxter is built upon the research that people in universities around the world have done over the past years. Robotics, manipulation, computer vision: they all share the prize here.

This is praise for all my colleagues who have worked hard and who never believed their research would make a difference. It takes a collective effort to change the world.

And a single mind who figures out how to make money out of it.

 

Posted by on October 16, 2012 in Discussion, Results

 


The Complex Dance of a Prey

During my research on Emergent Behaviours I’ve often been questioned about the usefulness of something that you can’t control, that you can’t predict and that seems to be totally random. I’m not going through those points here; I’ll let the future talk about the past.

However, scientifically speaking, I need to prove a few points. In particular I was looking for answers to two questions:

  • Can performance improve when allowing emergent behaviours?
  • Is an emergent behaviour an expression of randomness?

To answer these questions I set up a predator-prey scenario, where the predator had all the advantages, namely:

  • It is faster than the prey
  • It is programmed by me

Armed with the Python library PyEvolve, I developed several prey agents without success. This went on until I decided to blend complexity and goal-pursuing. In other words, I trained two recurrent neural networks, one to be complex and the other to avoid the predator. A third network mixes the two. The resulting prey speaks for itself:

This proved that complexity, the fertile ground for emergent behaviours, can improve the performance of a goal-oriented system.
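
For the curious, the blending step is conceptually very simple. Below is a minimal sketch of the idea, not the actual trained system: the three networks are stand-ins for the recurrent networks described above, each taken here as a plain function of the sensor vector.

import numpy as np

def prey_command(sensors, complex_net, escape_net, mixer_net):
    """Blend a 'complex' controller with a goal-directed one.
    complex_net was trained to produce rich (chaotic) dynamics,
    escape_net to flee the predator, and mixer_net returns a
    blending weight in [0, 1]."""
    u_complex = np.asarray(complex_net(sensors))
    u_escape = np.asarray(escape_net(sensors))
    w = mixer_net(sensors)
    return w * u_complex + (1.0 - w) * u_escape

Swapping complex_net for a random number generator is exactly the control experiment described next.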

To answer the second question, I replaced the complex module with a random number generator. The result was awful: the prey could not survive for more than 40 seconds!

Complexity (in the Kolmogorov sense) is not randomness. Chaotic processes have laws, and a successful prey managed to exploit them for survival. This hopefully gives me some breathing room in the struggle to prove that my research on emergent behaviours will lead to interesting results and ideas.

This work is currently under review for a journal publication… let’s hope for the best!

 

Posted by on November 30, 2010 in Chaos, Discussion

 


Being surprised by a robot

Some time ago I stumbled upon a podcast with an interview with Kristinn R. Thórisson. One part of the interview that stimulated my curiosity was his (and many others’) view that a robot behaviour, regardless of how smart it may be, loses its "wow" effect once it is known how that behaviour is generated. Consider for example the Eliza program: a very simple rule-based program that pretended to be a psychologist. Well, people sometimes believed there was a real person behind it! Obviously, once they knew what was going on, at best a grin appeared on their faces.
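
To give a flavour of just how simple such a rule-based program can be, here is a toy Eliza-style responder in a few lines of Python. It is a sketch of the pattern-reflection trick, of course, not the original program; the rules are invented for this example.

import re

# Each rule pairs a pattern to spot in the user's sentence with a
# template that reflects the matched words back as a question.
RULES = [
    (re.compile(r'\bI feel (.+)', re.I), "Why do you feel {0}?"),
    (re.compile(r'\bmy (\w+)', re.I), "Tell me more about your {0}."),
]

def eliza_reply(sentence, default="Please, go on."):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return default

print(eliza_reply("I feel nobody understands my robot"))
# prints: Why do you feel nobody understands my robot?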

So, is it true that whenever we know the mechanisms behind cleverness, we stop judging it as clever? I wouldn’t say so for a biological system. No matter how much we know about even the simplest organism, it will never cease to surprise us and stimulate our will to understand it.

Let’s return to planet robots. Do we want smart machines? Of course yes! We all dream about (and fear) these androids doing jobs for us while we relax and drink from the cornucopia of laziness. But do we want to be surprised by a robot?

The surprise factor in a robot is closely related to the concept of emergent behaviours. One can find several definitions of emergent behaviour, together with recipes for how to recognise one when you see it. I personally take the one written by Ronald and Sipper in their Robotics and Autonomous Systems paper entitled "Surprise versus unsurprise: Implications of emergence in robotics":

[emergence], where a system displays novel behaviours that escape, frustrate or sometimes, serendipitously, exceed the designer’s original intent.

I love this definition, not only for its clarity and descriptiveness, but also for its ironic style (I personally recommend reading other papers by these authors, as they are deep in content and their writing style is amazing).

It seems that, in order to be surprised by a robot, we need it to do something unexpected. And this unexpectedness (what a weird word) has to escape even the roboticist’s judgement axe. Wait, something is wrong here. Take a random paper in robotics and look at the results section: you’ll find plenty of graphs, tables and discussions about how closely the robot does what it’s expected to do. Even better, there is a desperate race for the "zero mean" error that reminds me of one of Zeno’s paradoxes, as the error will never reach this zero goal.

In a nutshell, surprise is good, but the robot has to do what you want it to do. There needs to be a dividing line between "obeying the orders" and "improvising". Where this line lies depends strongly on the application, and I think it will be the subject of greater research when, in the future, robots are really smart.

These are the questions that are mainly driving my research interests now. As it is written in my short bio, I am desperately trying to get robots to do something smart. And I believe that one way to achieve this is to relax the "zero error" requirements and let ourselves (the roboticists) be surprised by our own pieces of code and hardware. While, obviously, making sure that the robot behaves.

I will conclude my first post with a quote from a colleague/friend of mine, who spent a long time discussing these ideas with me, and who decided to write the introduction to a paper of mine:

Robots are tedious and boring. They are designed for specific tasks, which, after many hours of programming, swearing and beating our heads against the keyboard, they carry out autonomously. Nothing out of the ordinary can be expected from the pre-programmed robot, as all that goes on is an almost mechanical following of program instructions, with any deviation from the expected behaviour considered an error. As roboticists we aim for truly reliable, predictable robots that are free of unexpected behaviours — we don’t want our robotic car to unexpectedly drive us off a cliff or our vacuum-cleaning robot to attack the hamster. However, this is mind-numbingly boring…

He never had a chance to finish this thought. And if you want, I will give you proper credit for this wonderful (yet unpublishable) remark.


 

Posted by on June 13, 2010 in Discussion, Ideas, Research

 
