

Robotic tales from the real world

During my PhD I worked on two “real world” projects, namely an indoor and an outdoor robotic museum tour guide. This is a short account of that experience, rather than a description of what I did (which the interested reader can find here).

Cicerobot

The first project looked like CMU's Minerva: a robot guides visitors around several exhibits, interacting with them and proposing a few tours. The main problem I had was dealing with people. It was a hard task to convince the museum employees that “no, this robot is not going to take your jobs, please do not destroy it”. The second problem was the invisible obstacles. The museum was full of glass panels, glass screens and several other objects that don’t reflect light very willingly. And sonar was out of the question, as it is too noisy. Ah, yes, I forgot to mention the two staircases! The third problem was the working conditions: a laptop on my lap, on a plastic chair, no internet, and air conditioning that went on holiday during the Sicilian summer.

But the big moment came, as students from schools and the state TV came to record a session with the robot. And everything worked perfectly! The satisfaction of seeing the robot cheerfully negotiating (in)visible obstacles, kids and invaluable museum items is indescribable. And all with a single processor kept at 99% usage, where a bit less would have meant localisation failing and the robot tipping down the stairs. Countless hours of sweating and bug-tracking spent with a wonderful team had finally been repaid.

Below is the video that recorded the event. Note the last frames showing me sitting on the chair, looking very worried.

Robotanic

Take Cicerobot, do a pit-stop to change the wheels and a few sensors, apply some new make-up, and you’ve got Robotanic. With one difference: its museum was an outdoor Botanical Garden. And that’s a huge difference!

The environment was an area 100 meters long by 30 meters wide. The alleys were covered with sand and foliage, making the odometry a pure random number generator. The GPS kept reporting a position that jumped from the navigable alleys to the far less friendly trees and bushes nearby. And the working conditions made the museum above seem like heaven: sitting on stone benches, constantly under attack from mosquitoes, and keeping an eye on the sky for fear it would become too hot or rainy!

Again the big moment came; this time no TV, but several people attending a conference flocked to see the robot. At the very last minute my supervisor noticed an unplugged cable hanging from the camera and said “why don’t you plug it in?” I did, and I regret it. The FireWire cable triggered an interrupt conflict that knocked the GPS out. It took me more than half an hour to find the problem, and by then I had lost the momentum. The rest of the demo went well, but the bad start cast a cloud over the whole event.

Below is the only video I could find of the robot in action. The video is a bit wobbly, but I am proud of it.

Lessons learnt

First of all: Test^30 (that is, test to the power of 30). Whenever you are doing something in the real world, test it as much as possible. Something will still go wrong, but at least you are minimising the risk. Second: code freeze. If something is working, and you are sure it is working, put it in the fridge and leave it there until the very last moment (or until it expires!). Third, and most important: have fun. It is very frustrating to fall from the ideal world of pure research into the real world of people expecting something from you. But the satisfaction of seeing something really working is far more stimulating than having a paper published!

=-=-=-=-=
Powered by Blogilo

 

Posted by on August 2, 2010 in Research, Results

 


Quantifying the wow effect of a robot — Part II

In the previous post I introduced Kolmogorov complexity and showed how to calculate it with a few lines of Python. Here I will show how to calculate the complexity of a robot behaviour.

The first thing to define is what we are calculating the complexity of. A robot behaviour, according to classic robotics, is a mapping between inputs and outputs. Therefore it is natural to consider the robot as purely reactive and look at the motor outputs it produces in response to the sensory inputs it receives.

In the first example I considered a people-following behaviour I wrote some time ago (it is shown in action in this video). It simply steers the robot towards a person at a constant speed of 0.4 m/s. The inputs are the distance and angle between a detected person and the robot, and the outputs are the robot’s linear and angular speeds. Below are the velocities as recorded during around 10 minutes of operation.

The string we want to know the complexity of is made of the inputs followed by the outputs, for each time step. It is not surprising that the complexity of this string is, according to the tiny program shown before, 0.2810415403274712. This number alone does not say much, as we are only calculating an approximation of the complexity (see the comments on the previous post here). So let’s see what happens with a random behaviour.
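In code, assembling this string and scoring it looks roughly like the sketch below. The logged arrays are placeholders (real logs would come from the robot), and `kolmogorov` is the small function from the previous post:

```python
import zlib

import numpy as np

def kolmogorov(s):
    # Compression ratio as a proxy for Kolmogorov complexity (see Part I).
    return len(zlib.compress(s)) / len(s)

# Placeholder logs: one row per time step, recorded during the run.
inputs = np.random.rand(6000, 2)   # e.g. distance and angle to the person
outputs = np.random.rand(6000, 2)  # linear and angular speeds

# Inputs followed by outputs, for each time step, flattened into one string.
behaviour = np.hstack((inputs, outputs)).tobytes()
complexity = kolmogorov(behaviour)
```

A real log of a smooth, repetitive behaviour scores much lower than the random placeholders used here.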

In the second example I had the robot wandering in an environment of roughly 100 square meters, again for 10 minutes. The trajectory is shown below.

You can see that the robot was bouncing off the walls in a random pattern. The inputs/outputs string is made of 10 laser readings plus, again, the linear and angular speeds. And its complexity is… 0.30834272829763248! What? Only slightly higher than the previous one? You might think that this method of calculating the complexity of a robot behaviour is flawed. But it is not: the complexity of the (x,y) pairs shown above is 0.32940190211892578, so the trajectory is not as random as it might seem.

The explanation is that even what seems to be random behaviour is, in the long run, a process that repeats itself. In other words, staring at a moving robot is not as brain-engaging an activity as it might seem. But we can quantify this! If we plot the complexity as a function of time, we obtain the following graph:
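A curve like this can be produced by scoring progressively longer prefixes of the recorded behaviour. A sketch, where the behaviour string is again a random placeholder:

```python
import zlib

import numpy as np

def kolmogorov(s):
    return len(zlib.compress(s)) / len(s)

# Placeholder recording; in practice this is the logged inputs/outputs string.
behaviour = np.random.rand(6000, 4).tobytes()

# Complexity of the first n bytes, for growing n: one point per time window.
steps = range(1000, len(behaviour) + 1, 1000)
curve = [kolmogorov(behaviour[:n]) for n in steps]
```

For a repetitive behaviour the curve decays, as the compressor keeps finding the same patterns over and over; for the random placeholder above it stays flat.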

That is, the more we stare at the robot performing a simple task, the less interested we become. This relates to the question I was asking in this post: can we be surprised by a robot? I don’t have the answer yet, but in these two posts I have provided the tools we could use to analyse what the surprise effect is, and how we can quantify it.

In the final part of this post (yes, it is not finished yet) I will illustrate a simple experiment on the Kolmogorov complexity. I took a recurrent neural network and trained it using genetic algorithms. The goal of the GA was to promote networks whose output is complex. It is worth mentioning that I used the wonderful package PyEvolve. I wish there were more packages like it, especially for its code clarity and ease of use. A close-up of the time-varying output of the winning network is below.
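The core of that experiment can be sketched as follows. This is not the actual PyEvolve setup: a tiny two-neuron recurrent network and a bare-bones hill-climbing loop stand in for it, but the fitness function, which rewards genomes whose output stream compresses badly, is the essential idea:

```python
import zlib

import numpy as np

def kolmogorov(s):
    return len(zlib.compress(s)) / len(s)

def run_network(genome, steps=1000):
    # Two fully connected tanh neurons driven only by their own recurrence.
    W = genome[:4].reshape(2, 2)
    b = genome[4:]
    x = np.array([0.5, -0.5])
    out = np.empty(steps)
    for t in range(steps):
        x = np.tanh(W @ x + b)
        out[t] = x[0]
    return out

def fitness(genome):
    # Promote networks whose output is complex, i.e. hard to compress.
    return kolmogorov(run_network(genome).tobytes())

# A minimal (1+1) evolutionary loop standing in for the genetic algorithm.
rng = np.random.default_rng(0)
best = rng.normal(size=6)
best_fitness = fitness(best)
for generation in range(30):
    child = best + rng.normal(scale=0.3, size=6)
    child_fitness = fitness(child)
    if child_fitness > best_fitness:
        best, best_fitness = child, child_fitness
```

Networks that settle into a fixed point or a short cycle produce highly compressible output and get low fitness; the loop drifts towards weight settings whose output never quite repeats.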

Needless to say, the Kolmogorov complexity of the whole network output is 0.94423057694230572. Beware: although it may seem a random process, it is not. It is chaotic. Basically, I trained the neural network to misbehave!


 

Posted by on July 12, 2010 in Chaos, Research, Tutorial

 


Quantifying the wow effect of a robot — Part I

[This is a two-part post, as I realised while writing it that it was growing out of control.]

As a roboticist I spend most of my time dealing with robots. I program them, debug them, watch them perform some tasks. I might be surprised and thrilled by things people could judge as boring, or I might not get excited at all by robotic performances that others hail as giant leaps towards true AI. And, as I am a scientist too, I have a defect: sooner or later I will want to quantify my data. This includes the wow factor of a robot.

Imagine you are observing a robot performing a dance. At the beginning you will be caught by curiosity and will not take your eyes off the robot. But after some time, you will notice that the robot is following a pattern. The dance movements will always be the same, no matter how naturally random the programmer has tried to make them appear. After enough time spent watching it, a robot is no more interesting than a washing machine. Can we quantify this effect?

Computability theory comes to our help. In the sixties a quite clever Russian guy published a few papers about a theory which is as simple and elegant as it is powerful. After the name of its author, it is called Kolmogorov Complexity. Without digging into details, according to this theory the complexity of a string is the length of its shortest description in a given programming language. If no description is shorter than the string itself, we’d better give up and use the string as its own description.

For example, a huge pile of dishes might be a complex task for someone not lucky enough to have a dishwasher (the best robot ever!), but from a computational point of view it is simply described by "2^326 dishes stacked in an uncomfortable equilibrium". On the other hand, the outcome of the matches of the World Cup is hardly described by an algorithm, and the list of the results is its own best description. This might explain why all the Bayesian models and accurate simulations failed to predict the outcome of the World Cup, in spite of the efforts of the brightest minds in the world.

Obviously there is bad news: the Kolmogorov Complexity is not computable. That is, there will never be a program that receives an arbitrary string as input and gives as output the complexity of that string. Before you leave the room shaking your head in despair, you might want to know that there is a trick: the Kolmogorov Complexity is strongly related to entropy, which in turn is related to the compressibility of a string. I can see everybody rushing back to their seats with a light bulb shining over their heads: the Lempel-Ziv algorithm is the Swiss army knife of complexity.

To cut a long story short, the following magic lines in Python will produce a reasonable approximation to the Kolmogorov Complexity of a string s:

import zlib

def kolmogorov(s):
    # Compressed length over original length: the complexity per symbol.
    if isinstance(s, str):
        s = s.encode()  # zlib works on bytes
    return len(zlib.compress(s)) / len(s)

This is called the complexity per symbol, and it lies between 0 and 1, where 1 means that the string is incompressible, or very complex. Being an approximation, it works well only for very long strings. Let’s see some examples.

The complexity of a string of several random numbers is high:

from numpy import zeros, hstack
from numpy.random import rand, randn

arr = rand(1000)
kolmogorov(arr.tobytes())
  Out: 0.94303749999999997  

While it comes with no surprise that the complexity of a string of all zeros is low:

arr = zeros(1000)
kolmogorov(arr.tobytes())
  Out: 0.00125  

An interesting result comes when we mix the two strings:

arr = hstack((rand(1000), zeros(1000)))
kolmogorov(arr.tobytes())
  Out: 0.47612500000000002 

What about Gaussian numbers? If we use a high variance the result is not different from uniform numbers:

arr = randn(1000)
kolmogorov(arr.tobytes())
  Out: 0.96350000000000002 

But we can see the complexity becoming lower when we shrink the Gaussian width:

arr = 0.5 + randn(1000)*0.0001
kolmogorov(arr.tobytes())
  Out: 0.80937499999999996 

These are still random numbers, but the much smaller range makes the string far less complex. Below is a plot of the complexity as a function of the standard deviation: the smaller the width of the Gaussian, the lower the complexity of the string.
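A plot like that can be generated by sweeping the standard deviation and scoring each resulting string. A sketch, with `kolmogorov` redefined so the snippet runs on its own:

```python
import zlib

from numpy import logspace
from numpy.random import randn

def kolmogorov(s):
    return len(zlib.compress(s)) / len(s)

# One string of 1000 Gaussian samples per standard deviation value.
sigmas = logspace(-6, 0, 13)
complexities = [kolmogorov((0.5 + randn(1000) * s).tobytes())
                for s in sigmas]
```

Plotting `complexities` against `sigmas` (on a log axis) reproduces the falling curve: the narrower the Gaussian, the more predictable the bytes, the better the compression.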

In the next post I will describe how to use the Kolmogorov Complexity to measure the wow effect of a robot behaviour.



 

Posted by on July 9, 2010 in Chaos, Research, Tutorial

 


Being surprised by a robot

Some time ago I stumbled upon a podcast with an interview of Kristinn R. Thórisson. One part of the interview that stimulated my curiosity was his (and many others’) view that a robot behaviour, regardless of how smart it may be, loses its "wow" effect once it is known how the behaviour is generated. Consider for example the Eliza program: a very simple rule-based program that pretended to be a psychotherapist. Well, people sometimes believed there was a real person behind it! Obviously, once they knew what was going on, at best a grin appeared on their faces.

So, is it true that whenever we know the mechanisms behind cleverness, we stop judging it as clever? I wouldn’t say so for a biological system. No matter how much we know about even the simplest organism, it will never cease to surprise us and stimulate our will to understand it.

Let’s return to planet robots. Do we want smart machines? Of course yes! We all dream of (and fear) androids doing our jobs for us while we relax and drink from the cornucopia of laziness. But do we want to be surprised by a robot?

The surprise factor in a robot is closely related to the concept of emergent behaviours. One will find several definitions of emergent behaviour, together with recipes for how to recognise one when you see it. I personally take the one written by Ronald and Sipper in their Robotics and Autonomous Systems paper entitled "Surprise versus unsurprise: Implications of emergence in robotics":

[emergence], where a system displays novel behaviours that escape, frustrate or sometimes, serendipitously, exceed the designer’s original intent.

I love this definition, not only for its clarity and descriptiveness, but also for its ironic style (I recommend reading other papers by the same authors, as they are deep in content and their writing style is amazing).

It seems that, in order to surprise us, a robot must do something unexpected. And this unexpectedness (what a weird word) has to escape even the roboticist’s judgement axe. Wait, something is wrong here. Take a random paper in robotics and look at the results section: you’ll see plenty of graphs, tables and discussions about how much the robot does what it is expected to do. Even better, there is a desperate race for the "zero mean" error that really reminds me of one of Zeno’s paradoxes, as the error will never reach the zero goal.

In a nutshell, surprise is good, but the robot has to do what you want it to do. There needs to be a line between "obeying the orders" and "improvising". Where this line lies is strongly dependent on the application, and I think it will be the subject of much more research when, in the future, robots are really smart.

These are the questions that are mainly driving my research interests now. As it is written in my short bio, I am desperately trying to get robots to do something smart. And I believe that one way to obtain this is to relax the "zero error" requirements, and let ourselves (the roboticists) be surprised by our own piece of code/hardware. While, obviously, making sure that the robot behaves.

I will conclude my first post with a quote from a colleague/friend of mine, who spent a long time discussing these ideas with me, and who decided to write the introduction to a paper of mine:

Robots are tedious and boring. They are designed for specific tasks, which, after many hours of programming, swearing and beating our heads against the keyboard, they carry out autonomously. Nothing out of the ordinary can be expected from the pre-programmed robot as all that goes on is an almost mechanical following of program instructions, with any deviation from the expected behaviour considered as an error. As roboticists we aim for truly reliable, predictable robots that are free of unexpected behaviours — we don’t want our robotic car to unexpectedly drive us off the cliff or our vacuum cleaning robot to attack the hamster. However, this is mind-numbingly boring…

He never had a chance to finish this thought. And if you want, I will give you proper credit for this wonderful (yet unpublishable) remark.


 

Posted by on June 13, 2010 in Discussion, Ideas, Research

 
