In the previous post I introduced Kolmogorov complexity and showed how to approximate it with a few lines of Python. Here I will show how to calculate the complexity of a robot's behaviour.
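As a quick reminder, the approximation works by compressing the string and taking the ratio between compressed and original size. The exact program from the previous post may differ; a minimal sketch along those lines is:

```python
import zlib

def complexity(data: bytes) -> float:
    """Approximate Kolmogorov complexity as a compression ratio:
    compressed size over original size. Regular strings score low;
    random-looking strings score close to (or even above) 1."""
    return len(zlib.compress(data)) / len(data)
```

For example, `complexity(b"ab" * 1000)` comes out far lower than the complexity of the same number of random bytes.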
The first thing to define is what we are calculating the complexity of. A robot behaviour, according to classic robotics, is a mapping between inputs and outputs. It is therefore natural to consider the robot as purely reactive and to calculate the motor outputs it produces from the sensory inputs it receives.
In the first example I considered a people-following behaviour I wrote some time ago (it is shown in action in this video). It simply steers the robot towards a person at a constant speed of 0.4 m/s. The inputs are the distance and angle between a detected person and the robot, and the outputs are the robot's linear and angular speeds. Below are the velocities as recorded during around 10 minutes of operation.
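A rough sketch of such a controller is below. The function name and the proportional steering gain are my assumptions here, not the original code; only the constant 0.4 m/s forward speed comes from the behaviour described above.

```python
# Hypothetical sketch: steer toward a detected person at constant
# forward speed. `angle` is the bearing to the person in radians
# (0 = straight ahead); `gain` is an assumed proportional constant.
def follow_person(angle: float, gain: float = 1.0):
    linear = 0.4            # constant forward speed in m/s (from the post)
    angular = gain * angle  # turn proportionally toward the person
    return linear, angular
```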
The string we want to know the complexity of is made of the inputs followed by the outputs, for each time step. It is not surprising that the complexity of this string is, according to the tiny program shown before, 0.2810415403274712. This number alone does not say much, as we are only calculating an approximation of the complexity (see the comments on the previous post here). So let's see what happens with a random behaviour.
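A minimal sketch of how such a string could be assembled, per time step, inputs first and outputs after. The number format and separator are assumptions; the idea is simply to serialise the recorded values into one string whose complexity we then estimate:

```python
def behaviour_string(steps) -> bytes:
    """Serialise a recorded behaviour into one string: for each time
    step, the sensory inputs followed by the motor outputs.
    `steps` is an iterable of (inputs, outputs) tuples of floats."""
    parts = []
    for inputs, outputs in steps:
        parts.extend(f"{v:.3f}" for v in inputs)
        parts.extend(f"{v:.3f}" for v in outputs)
    return " ".join(parts).encode()
```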
In the second example I had the robot wandering in an environment of roughly 100 square meters, again for 10 minutes. The trajectory is shown below.
You can see that the robot was bouncing off the walls in a random pattern. The inputs/outputs string is made of 10 laser readings plus, again, the linear and angular speeds. And its complexity is… 0.30834272829763248! What? Is it only slightly higher than the previous one? You might think that this method of calculating the complexity of a robot behaviour is flawed. But it is not: the complexity of the (x,y) pairs shown above is 0.32940190211892578, so the trajectory is not as random as it might seem.
The explanation is that even what seems to be a random behaviour is, in the long run, a process that repeats itself. In other words, staring at a moving robot is not as brain-engaging an activity as it might seem. But we can quantify this! If we plot the complexity as a function of time, we obtain the following graph:
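One way to produce such a curve, assuming the compression-ratio approximation from before, is to compute the complexity of growing prefixes of the recorded string. This is a sketch of the idea, not necessarily the exact procedure I used:

```python
import zlib

def complexity_over_time(data: bytes, step: int = 1024):
    """Compression-ratio complexity of growing prefixes of the
    recorded behaviour string, one value per `step` bytes."""
    return [len(zlib.compress(data[:n])) / n
            for n in range(step, len(data) + 1, step)]
```

For a repetitive behaviour the curve decreases: the longer we record, the more repetitions the compressor can exploit.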
That is, the more we stare at a robot performing a simple task, the less interested we become. This relates to the question I was asking in this post: can we be surprised by a robot? I don't have the answer yet, but in these two posts I have provided the tools we could use to analyse what the surprise effect is, and how we can quantify it.
In the final part of this post (yes, it is not finished yet) I will illustrate a simple experiment on Kolmogorov complexity. I took a recurrent neural network and trained it using a genetic algorithm. The goal of the GA was to promote networks whose output is complex. It is worth mentioning that I used the wonderful package PyEvolve. I wish there were more packages like it, especially for its code clarity and ease of use. A close-up of the time-varying output of the winning network is below.
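I won't reproduce the PyEvolve setup here, but the core of the experiment is the fitness function. As a hedged sketch of the idea only: the evaluator scores a genome by how poorly the output sequence of its network compresses. The encoding of the outputs is an assumption, and in the real experiment the samples would come from running the candidate recurrent network:

```python
import zlib

# Sketch of a complexity-maximising fitness: higher score for output
# sequences that compress poorly, i.e. look complex. Here it takes
# the raw output samples directly rather than running a network.
def complexity_fitness(output_samples) -> float:
    data = " ".join(f"{v:.3f}" for v in output_samples).encode()
    return len(zlib.compress(data)) / len(data)
```

A GA maximising this score will, generation after generation, favour networks whose output is ever less regular.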
Needless to say, the Kolmogorov complexity of the whole network output is 0.94423057694230572. Beware: although it may seem a random process, it is not. It is chaotic. Basically, I trained the neural network to misbehave!