
Quantifying the wow effect of a robot — Part II

12 Jul

In the previous post I introduced Kolmogorov complexity and showed how to approximate it with a few lines of Python. Here I will show how to calculate the complexity of a robot behaviour.
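Those few lines are not repeated here; as a reminder, the following is a minimal sketch of the same idea, assuming the approximation is simply a compression ratio computed with zlib (the helper name is mine, not necessarily the one used in Part I):

import zlib

def complexity(s: str) -> float:
    """Approximate Kolmogorov complexity as the ratio between the
    compressed and uncompressed length of a string: values close to 1
    mean the string is hard to compress, i.e. more 'complex'."""
    data = s.encode("utf-8")
    return len(zlib.compress(data, 9)) / len(data)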

The first thing to define is what we are calculating the complexity of. A robot behaviour, according to classic robotics, is a mapping between inputs and outputs. It is therefore natural to consider the robot as purely reactive and to look at the motor outputs it produces in response to the sensory inputs it receives.

In the first example I considered a people-following behaviour I wrote some time ago (it is shown in action in this video). It simply steers the robot towards a person at a constant speed of 0.4 m/s. The inputs are the distance and angle between a detected person and the robot, and the outputs are the robot's linear and angular speeds. The velocities recorded during around 10 minutes of operation are shown below.
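The controller itself is not listed in the post; as a rough sketch, it boils down to a single function from the person's relative position to velocity commands. The function name, the proportional gain and the stopping distance below are hypothetical, not the original code:

def follow_person(distance, angle, k_angular=1.0, linear_speed=0.4, stop_distance=0.5):
    """Steer towards a detected person at a constant linear speed.

    k_angular and stop_distance are made-up parameters; the original
    behaviour may handle these details differently.
    """
    v = linear_speed if distance > stop_distance else 0.0  # 0.4 m/s until close
    w = k_angular * angle                                   # turn towards the person's bearing
    return v, w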

The string we want to know the complexity of is made of the inputs followed by the outputs, for each time step. It is not surprising that the complexity of this string, according to the tiny program shown before, is 0.2810415403274712. This number alone does not say much, as we are only calculating an approximation of the complexity (see the comments on the previous post here). So let’s see what happens with a random behaviour.
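Concretely, one way to build that string (the helper below is mine, not the original code) is to format one line per time step, inputs first and outputs last, and feed it to the complexity function sketched earlier:

def behaviour_string(inputs, outputs):
    """One line per time step: the input values followed by the output values.

    inputs and outputs are assumed to be equal-length sequences of tuples
    of floats, e.g. (distance, angle) and (linear, angular) speeds.
    """
    rows = []
    for i, o in zip(inputs, outputs):
        rows.append(" ".join(f"{x:.3f}" for x in (*i, *o)))
    return "\n".join(rows)

# e.g. complexity(behaviour_string(person_observations, commanded_velocities))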

In the second example I had the robot wandering in an environment of roughly 100 square metres, again for 10 minutes. The trajectory is shown below.

You can see that the robot was bouncing off the walls in a seemingly random pattern. The inputs/outputs string is made of 10 laser readings plus, again, the linear and angular speeds. And its complexity is… 0.30834272829763248!! What?? Is it only slightly higher than the previous one? You might think that this method of calculating the complexity of a robot behaviour is flawed. But it is not: the complexity of the (x,y) pairs shown above is 0.32940190211892578, so the trajectory is not as random as it might seem.

The explanation is that even what seems to be a random behaviour is, in the long run, a process that repeats itself. In other words, staring at a moving robot is not as brain-engaging an activity as it might seem. But we can quantify this! If we plot the complexity as a function of time, we obtain the following graph:

That is, the more we stare at a robot performing a simple task, the less interested we become. This relates to the question I was asking in this post: can we be surprised by a robot? I don’t have the answer yet, but in these two posts I have provided a tool we could use to analyse what the surprise effect is and how we can quantify it.
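For reference, a curve like the one above can be obtained by computing the complexity of increasingly long prefixes of the behaviour string. A minimal sketch, reusing the helpers sketched earlier; the prefix step and the plotting details are my choices, not the original script:

import matplotlib.pyplot as plt

def complexity_over_time(rows, step=100):
    """Complexity of the behaviour string truncated after t time steps."""
    times, values = [], []
    for t in range(step, len(rows) + 1, step):
        times.append(t)
        values.append(complexity("\n".join(rows[:t])))
    return times, values

# rows = behaviour_string(inputs, outputs).split("\n")
# t, c = complexity_over_time(rows)
# plt.plot(t, c)
# plt.xlabel("time steps")
# plt.ylabel("approximate complexity")
# plt.show()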

In the final part of this post (yes, it is not finished yet) I will illustrate a simple experiment on Kolmogorov complexity. I took a recurrent neural network and trained it using a genetic algorithm. The goal of the GA was to promote networks whose output is complex. It is worth mentioning that I used the wonderful package PyEvolve; I wish there were more packages like it, especially for its code clarity and ease of use. A close-up of the time-varying output of the winning network is below.
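The PyEvolve setup is not included in the post, so the following is only a self-contained sketch of the idea using numpy and a bare-bones genetic algorithm: the genome is the flattened weights of a tiny recurrent network, and the fitness is the compression-based complexity of its output. All sizes, constants and genetic operators are my assumptions, not the original experiment:

import numpy as np

N_HIDDEN = 5     # hidden units (arbitrary choice)
STEPS = 200      # length of the generated output sequence

def run_rnn(weights):
    """Unroll a tiny recurrent network driven by a constant input
    and return its scalar output sequence."""
    W_hh = weights[:N_HIDDEN * N_HIDDEN].reshape(N_HIDDEN, N_HIDDEN)
    w_out = weights[N_HIDDEN * N_HIDDEN:]
    h = np.zeros(N_HIDDEN)
    outputs = []
    for _ in range(STEPS):
        h = np.tanh(W_hh @ h + 0.1)            # constant bias as the only input
        outputs.append(float(np.tanh(w_out @ h)))
    return outputs

def fitness(weights):
    """Compression-based complexity of the quantised output sequence."""
    s = " ".join(f"{o:.3f}" for o in run_rnn(weights))
    return complexity(s)                        # helper sketched earlier

def evolve(pop_size=40, generations=100, sigma=0.3):
    """Bare-bones evolution: keep the best half, mutate it with Gaussian noise."""
    genome_len = N_HIDDEN * N_HIDDEN + N_HIDDEN
    pop = np.random.randn(pop_size, genome_len)
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-(pop_size // 2):]]
        children = parents + sigma * np.random.randn(*parents.shape)
        pop = np.vstack([parents, children])
    return max(pop, key=fitness)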

Needless to say, the Kolmogorov complexity of the whole network output is 0.94423057694230572. Beware: although it may look like a random process, it is not. It is chaotic. Basically, I trained the neural network to misbehave!


2 responses to “Quantifying the wow effect of a robot — Part II”

  1. RobotGrrl

    July 14, 2010 at 11:54 am

    It’s interesting that you’re thinking that a “wow” effect can be done through pure input and output of movement based behaviours!

    Have you tried looking into this through the sociable robotics point of view?

    Keep up the interesting blog posts! 🙂

     
    • Lorenzo Riano

      July 14, 2010 at 12:10 pm

      The wow effect, being related to a human observer, should be measured by observing what the robot does. For example, the robot may be sitting in the room doing string theory calculations, but from an external observer’s point of view it is doing nothing, so it is not interesting.

      I included the sensory inputs in the calculations because I wanted to calculate the complexity of the robot behaviour from the robot’s point of view. Having rich sensor inputs is a good starting point for a complex behaviour. However, what these graphs show is that the robot’s perception is not rich. Even if it were, merely moving around is not that complex after all!

      And the social aspects are probably among the main applications of these ideas 🙂

       
