
Quantifying the wow effect of a robot — Part II

In the previous post I introduced Kolmogorov complexity and showed how to approximate it using a few lines of Python. Here I will show how to calculate the complexity of a robot behaviour.

The first thing to define is what we are calculating the complexity of. A robot behaviour, according to classic robotics, is a mapping between inputs and outputs. Therefore it is natural to consider the robot as purely reactive and to look at the motor outputs it produces in response to the sensory inputs it receives.

In the first example I considered a person-following behaviour I wrote some time ago (it is shown in action in this video). It simply steers the robot towards a person while moving at a constant speed of 0.4 m/s. The inputs are the distance and angle between a detected person and the robot, and the outputs are the robot's linear and angular speeds. Below are the velocities recorded during around 10 minutes of operation.
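The post does not show the controller itself, but a minimal sketch of such a steering behaviour, assuming a simple proportional turn towards the person's bearing (the gain is my invention; only the constant 0.4 m/s speed comes from the post), could look like this:

def follow_person(distance, angle, k_angular=1.0):
    # Constant forward speed of 0.4 m/s as stated in the post; the distance
    # input is unused in this simplistic sketch but kept to mirror the inputs.
    linear = 0.4
    angular = k_angular * angle   # turn proportionally to the person's bearing
    return linear, angular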

The string we want to know the complexity of is made of the inputs followed by the outputs, for each time step. It is not surprising that the complexity of this string, according to the tiny program shown before, is 0.2810415403274712. This number alone does not say much, as we are only calculating an approximation of the complexity (see the comments on the previous post here). So let's see what happens with a random behaviour.
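As a rough sketch of how such a string could be assembled and fed to the kolmogorov() function from the previous post (the file names and the use of NumPy are my assumptions, not the original code):

import numpy as np

# Hypothetical per-time-step logs of the run: each row of inputs holds
# (distance, angle), each row of outputs holds (linear speed, angular speed).
inputs = np.loadtxt("follower_inputs.txt")
outputs = np.loadtxt("follower_outputs.txt")

# Inputs followed by outputs for every time step, flattened into one byte string.
behaviour = np.hstack((inputs, outputs)).tobytes()
print(kolmogorov(behaviour))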

In the second example I had the robot wander for another 10 minutes in an environment of roughly 100 square meters. The trajectory is shown below.

You can see that the robot was bouncing off the walls in a random pattern. This time the inputs/outputs string is made of 10 laser readings plus, again, the linear and angular speeds. And its complexity is… 0.30834272829763248!! What?? Only slightly higher than the previous one? You might think that this method of calculating the complexity of a robot behaviour is flawed. But it is not: the complexity of the (x,y) pairs shown above is 0.32940190211892578, so the trajectory is not as random as it might seem.

The explanation is that even what seems to be a random behaviour is, in the long run, a process that repeats itself. In other words, staring at a moving robot is not as brain-engaging an activity as it might seem. But we can quantify this! If we plot the complexity as a function of time, we obtain the following graph:
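The curve is just the complexity of ever longer prefixes of the behaviour string; a minimal sketch of how it could be computed, with an arbitrary 1000-byte step of my choosing:

def complexity_over_time(behaviour, step=1000):
    # Estimate the complexity of the first step, 2*step, 3*step, ... bytes of the run.
    return [kolmogorov(behaviour[:n]) for n in range(step, len(behaviour) + 1, step)]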

That is, the more we stare at the robot performing a simple task, the less interested we become. This relates to the question I was asking in this post: can we be surprised by a robot? I don't have the answer yet, but in these two posts I have provided a tool we could use to analyse what the surprise effect is and how we can quantify it.

In the final part of this post (yes, it is not finished yet) I will illustrate a simple experiment with Kolmogorov complexity. I took a recurrent neural network and trained it using a genetic algorithm. The goal of the GA was to promote networks whose output is complex. It is worth mentioning that I used the wonderful package PyEvolve; I wish there were more packages like it, especially for its code clarity and ease of use. A close-up of the time-varying output of the winning network is below.
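The post does not include the training code, but a rough sketch of how PyEvolve (a Python 2-era package) could be wired to such a fitness function might look like the following; the network size, parameter ranges and constant input drive are all my assumptions:

from pyevolve import G1DList, GSimpleGA, Initializators, Mutators
import numpy as np

N_HIDDEN = 5      # hypothetical number of recurrent units
N_STEPS = 2000    # hypothetical length of the generated output trace

def run_rnn(weights):
    # Simulate a tiny fully connected recurrent net and return its output trace.
    W = np.array(weights).reshape(N_HIDDEN, N_HIDDEN)
    state = np.zeros(N_HIDDEN)
    trace = np.empty(N_STEPS)
    for t in range(N_STEPS):
        state = np.tanh(W.dot(state) + 0.1)   # small constant drive
        trace[t] = state[0]                   # read the first unit as the output
    return trace

def eval_func(chromosome):
    # Fitness: the estimated Kolmogorov complexity of the network output.
    return kolmogorov(run_rnn(list(chromosome)).tobytes())

genome = G1DList.G1DList(N_HIDDEN * N_HIDDEN)   # weights as a flat list
genome.setParams(rangemin=-2.0, rangemax=2.0)
genome.initializator.set(Initializators.G1DListInitializatorReal)
genome.mutator.set(Mutators.G1DListMutatorRealGaussian)
genome.evaluator.set(eval_func)

ga = GSimpleGA.GSimpleGA(genome)
ga.evolve(freq_stats=20)
best = ga.bestIndividual()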

Needless to say, the Kolmogorov complexity of the whole network output is 0.94423057694230572. Beware: although it may look like a random process, it is not. It is chaotic. Basically, I trained the neural network to misbehave!


Quantifying the wow effect of a robot — Part I

[This is a two-part post, as I realised while writing it that it was growing out of control.]

As a roboticist I spend most of my time dealing with robots. I program them, debug them, watch them perform tasks. I might be surprised and thrilled by things people could judge as boring, or I might not get excited at all by robotic performances that others hail as giant leaps towards true AI. And, as I am also a scientist, I have a defect: sooner or later I will want to quantify my data. This includes the wow factor of a robot.

Imagine you are observing a robot performing a dance. At the beginning you will be caught by curiosity and will not take your eyes off the robot. But after a while you will notice that the robot is following a pattern. The dance movements will always be the same, no matter how naturally random the programmer has tried to make them appear. After enough time spent watching it, a robot is no more interesting than a washing machine. Can we quantify this effect?

Computation theory comes to our aid. In the sixties a quite clever Russian guy published a few papers about a theory which is as simple and elegant as it is powerful; it is called Kolmogorov complexity, after its author. Without digging into details, according to this theory the complexity of a string is the length of its shortest description in a given programming language. If no description is shorter than the string itself, we had better give up and use the string as its own description.

For example, a huge pile of dishes might be a complex task for someone not lucky enough to have a dishwasher (the best robot ever!), but from a computer's point of view it is simply described by "2^326 dishes stacked in an uncomfortable equilibrium". On the other hand, the outcomes of the World Cup matches are hardly described by an algorithm, and the list of results is their best description. This might explain why all the Bayesian models and accurate simulations failed to predict the outcome of the World Cup, in spite of the efforts of the brightest minds in the world.

Obviously there is bad news: the Kolmogorov complexity is not computable. That is, there will never be a program that receives an arbitrary string as input and gives the complexity of that string as output. Before you leave the room shaking your head in despair, you might want to know that there is a trick: the Kolmogorov complexity is strongly related to entropy, which in turn is related to the compressibility of a string. I can see everybody rushing back to their seats with a light bulb shining over their heads: the Lempel-Ziv algorithm is the Swiss army knife of complexity.

To cut a long story short, the following magic lines in Python will produce a reasonable approximation to the Kolmogorov Complexity of a string s:

import zlib

def kolmogorov(s):
    # Approximate Kolmogorov complexity per symbol of the (byte) string s.
    l = float(len(s))            # length of the original string
    compr = zlib.compress(s)     # Lempel-Ziv (DEFLATE) compression
    c = float(len(compr))        # length of the compressed string
    return c / l                 # a ratio close to 1 means incompressible

This is called complexity per symbol, and it lies between 0 and 1, where 1 means that the string is incompressible, or very complex. Being an approximation, it works well only for very long strings. Let's see some examples (the snippets below assume NumPy's rand, zeros, hstack and randn are available, e.g. from an interactive pylab session).

The complexity of a string of several random numbers is high:

arr = rand(1000)
kolmogorov(arr.tostring())
  Out: 0.94303749999999997  

It comes as no surprise that the complexity of a string of all zeros is low:

arr = zeros(1000)
kolmogorov(arr.tostring())
  Out: 0.00125  

An interesting result comes when we mix the two strings:

arr = hstack((rand(1000),zeros(1000)))
kolmogorov(arr.tostring())
  Out: 0.47612500000000002 

What about Gaussian numbers? If we use a high variance the result is not much different from that of uniform numbers:

arr = randn(1000)
kolmogorov(arr.tostring())
  Out: 0.96350000000000002 

But we can see the complexity becoming lower when we shrink the Gaussian width:

arr = 0.5 + randn(1000)*0.0001
kolmogorov(arr.tostring())
  Out: 0.80937499999999996 

These are still random numbers, but the much smaller range makes their string representation far less complex. Below is a plot of the complexity as a function of the standard deviation: the smaller the width of the Gaussian, the lower the complexity of the string.
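The sweep behind that plot could be reproduced along these lines (the range of standard deviations is my guess, and tobytes() is the modern spelling of tostring()):

import numpy as np
import matplotlib.pyplot as plt

stds = np.logspace(-6, 0, 20)   # hypothetical range of Gaussian widths
complexities = [kolmogorov((0.5 + np.random.randn(1000) * s).tobytes())
                for s in stds]

plt.semilogx(stds, complexities)
plt.xlabel("standard deviation")
plt.ylabel("complexity per symbol")
plt.show()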

In the next post I will describe how to use the Kolmogorov Complexity to measure the wow effect of a robot behaviour.
