Some time ago I stumbled upon a podcast featuring an interview with Kristinn R. Thórisson. One part of the interview that stimulated my curiosity was his (and many others') view that a robot's behaviour, regardless of how smart it may be, loses its "wow" effect once it is known how that behaviour is generated. Consider, for example, the ELIZA program: a very simple rule-based program that pretended to be a psychotherapist. Well, people sometimes believed there was a real person behind it! Obviously, once they knew what was going on, at best a grin appeared on their faces.
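To see just how little machinery is needed for that illusion, here is a minimal sketch of an ELIZA-style rule-based responder. The patterns and replies below are my own toy examples, not the original ELIZA script (which used keyword ranking and pronoun reflection on top of this idea):

```python
import re

# Illustrative rule table: (pattern, response template).
# The real ELIZA script was far richer; this just shows the principle.
RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
    (r"(.*)", "Please, go on."),  # catch-all keeps the dialogue moving
]

def respond(sentence: str) -> str:
    """Return the first matching rule's reply, filled with captured text."""
    for pattern, template in RULES:
        match = re.match(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please, go on."

print(respond("I need a holiday"))  # Why do you need a holiday?
print(respond("hello"))             # Please, go on.
```

A dozen such rules, looped over user input, were enough to fool people in the 1960s; reading the table above is exactly the moment the grin appears.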
So, is it true that whenever we know the mechanisms behind cleverness, we stop judging it as clever? I wouldn’t say so for a biological system. No matter how much we know about even the simplest organism, it will never cease to surprise us and stimulate our will to understand it.
Let’s return to planet robots. Do we want smart machines? Of course yes! We all dream of (and fear) androids doing our jobs for us while we relax and drink from the cornucopia of laziness. But do we want to be surprised by a robot?
The surprise factor in a robot is closely related to the concept of emergent behaviours. One will find several definitions of emergent behaviour, together with recipes for recognising one when you see it. I personally take the one written by Ronald and Sipper in their Robotics and Autonomous Systems paper entitled "Surprise versus unsurprise: Implications of emergence in robotics":
[emergence], where a system displays novel behaviours that escape, frustrate or sometimes, serendipitously, exceed the designer’s original intent.
I love this definition, not only for its clarity and descriptiveness, but also for its ironic style (I personally recommend reading other papers by these authors, as they are deep in content and their writing style is amazing).
It seems that, in order to surprise us, a robot must do something unexpected. And this unexpectedness (what a weird word) has to escape even the roboticist’s judgement axe. Wait, something is wrong here. Take a random robotics paper and look at its results section: you’ll see plenty of graphs, tables and discussions about how well the robot does what it is expected to do. Even better, there is a desperate race towards a "zero mean" error that really reminds me of one of Zeno’s paradoxes, as the error will never reach this zero goal.
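The Zeno analogy is easy to make concrete. Here is a toy illustration (not any particular controller, just an assumed error that shrinks geometrically) of how the error gets ever closer to zero without ever arriving:

```python
# Hypothetical tracking error that is halved at every iteration,
# mimicking the asymptotic "race to zero" seen in results sections.
error = 1.0
for step in range(10):
    error /= 2.0
    print(f"step {step}: error = {error:.6f}")

# After 10 halvings the error is 1/1024: ever smaller, never exactly zero.
```

Achilles closes half the gap at every step, and so does the plotted error curve; the zero goal is a limit, not a destination.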
In a nutshell, surprise is good, but the robot still has to do what you want it to do. There needs to be a dividing line between "obeying orders" and "improvising". Where this line lies depends strongly on the application, and I think it will be the subject of much more research once robots in the future become really smart.
These are the questions that mainly drive my research interests now. As my short bio says, I am desperately trying to get robots to do something smart. And I believe one way to achieve this is to relax the "zero error" requirement and let ourselves (the roboticists) be surprised by our own pieces of code/hardware, while, obviously, making sure that the robot behaves.
I will conclude my first post with a quote from a colleague/friend of mine, who spent a long time discussing these ideas with me, and who decided to write the introduction to a paper of mine:
Robots are tedious and boring. They are designed for specific tasks, which, after many hours of programming, swearing and beating our heads against the keyboard, they carry out autonomously. Nothing out of the ordinary can be expected from the pre-programmed robot, as all that goes on is an almost mechanical following of program instructions, with any deviation from the expected behaviour considered an error. As roboticists we aim for truly reliable, predictable robots that are free of unexpected behaviours: we don’t want our robotic car to unexpectedly drive us off a cliff, or our vacuum cleaning robot to attack the hamster. However, this is mind-numbingly boring…
He never had a chance to finish this thought. And if you want, I will give you proper credit for this wonderful (yet unpublishable) remark.