Magazine, Vol. 3: Digital Transformation

“Intelligence as such is not interested in the concept of domination”

Image: “0 1 #9” © Max Dauven

The Friendly Robot Next Door: An Optimistic View of Artificial Intelligence


Should we be wary of Artificial Intelligence? In this interview with 42 Magazine, Professor Robert Trappl shares his understanding of the interplay of rationality and emotions and of a possible cohabitation of humans and machines, and provides some reasons for optimism.

Prof. Dr Trappl – Are there any misconceptions about Artificial Intelligence that you would like to clear up?

I would like to clear up the common misconception that the further development of Artificial Intelligence carries the risk of a superintelligence: an intelligence which seizes world domination and enslaves humankind or eliminates humans entirely. I do not believe this, even though this position is held by people like Elon Musk or Stephen Hawking, who are by all means competent in other domains.

Why do you have doubts about the notion of a hostile superintelligence?

The term superintelligence ascribes to intelligence a feature it does not have, namely intention. First of all, intelligence describes the effort to solve problems and to survive in an often unpredictable environment. Motives and the associated ambitions – adapted, of course, to the respective living environment – come from us humans, that is, from a different system. Intelligence as such, however, is not interested in the concept of domination. Nevertheless, at the OFAI, the Austrian Research Institute for Artificial Intelligence, we are currently working on the modelling of personalities. I do not want to rule out entirely that such a simulation of personalities might some day become problematic for humankind.


“First of all, intelligence describes the effort to solve problems and to survive in an often unpredictable environment”


How does the relationship between humans and machines evolve alongside the increasing embodiment of Artificial Intelligence?

Let me give you an example: David Levy’s book “Love and Sex with Robots” sketches a future in which it is possible to marry robots. Out of curiosity, I studied the Austrian marriage law. According to it, robots are in no way excluded from becoming a marriage partner. However, the marriage law requires both partners to be at least sixteen years of age – and who wants to marry a sixteen-year-old robot? Another book, published in the 1980s, which discusses interesting approaches to this topic is “The Intimate Machine” by Neil Frude.

According to this book, robots need to be unpredictable to a certain degree in order to remain interesting – they need to have the human factor, so to speak. The algorithms of online dating agencies are based on a similar principle: they search for partners who are as similar as possible, but whose small differences lend them a particular charm. The adventure of meeting someone by chance is replaced by a systematic choice of partner based on similarities. From there, it is not a big step to matching humans with robots.

The age of Artificial Intelligence has often been proclaimed in the past, but has not occurred thus far. What is different this time?

By now, the necessary steps in technological development have been made, and people have come to recognise this. The defeat of Garry Kasparov by the IBM computer Deep Blue in a game of chess in 1997 was predictable, because computing power developed according to Moore’s Law and hence doubled roughly every 12 to 24 months. The defeat of the world’s best Go player, Lee Sedol, by the program “AlphaGo”, however, baffled experts. In this instance, the principles of Deep Learning were used effectively in public for the first time. Deep Learning is a subfield of Machine Learning that combines large quantities of data with network structures modelled on the neural networks of the human brain. Through this network structure, the system is able to link newly learnt information to new contents and in this way autonomously improve itself. Even though these approaches had already been employed in the 1970s and 80s, they could not perform to their full potential, because computers back then were not fast enough and had neither sufficient memory nor a connection to the internet.

The possibilities of Deep Learning lead people to think: “this poses a real danger”, because machines and algorithms can effectively replace manpower in the long run. And indeed, many jobs are becoming superfluous – interestingly enough, not so much among workers and craftspeople, as was expected, but rather among office staff. When you enter a bank nowadays, almost no one is sitting inside; those positions simply do not exist anymore. That is of course strange. Forecasts such as that of the management consultancy McKinsey expect that about 40 to 50 per cent of all jobs could be replaced in the course of the digital shift.

Admittedly, the substance of such forecasts can fluctuate strongly over time. Taken as a whole, however, what you have just described emphasises that the digital transformation is initiating a change in the working world.

Absolutely, but everyone agrees that something disruptive is about to happen. The drivers of the digital transformation – increased computing power and the growing extent of networking – are developments that have made the topic of Artificial Intelligence tangible.

Speaking of Deep Learning: Alan Turing already developed the idea of the “Child Machine”, a basic algorithm capable of learning autonomously. Does this mean that future AI will undergo a process of growth similar to that of a child?

At present we are working on two projects which investigate how children learn to speak by interacting with their environment. These projects contribute a great deal to our research into how one can teach a robot to do certain kinds of work, such as fixing a car engine. For this to work, we need to establish a shared vocabulary, so that humans and machines can effectively interact with each other. This vocabulary can then be expanded, for example by showing a robot objects while saying: “This is a tube, as opposed to a pipe.” In this way, a robot is able to understand relations and apply instructions, similar to an apprentice or a child. It is certainly a more complex task than one might think, but it is possible.

The development of Augmented Personality and Augmented Reality can already be clearly felt – my smartphone is nothing other than an extension of myself; it is just that the interface, a finger on a touchscreen, is still relatively simple.

Yes, that is right, but the technology is constantly being improved. We are, however, far from what we see in science fiction films. The notion that we will some day be able to download our self in the form of a brain program is in a certain sense an illusion, because in our brain, hardware and software are inseparable – they are sometimes even called “wetware”. Hence, you cannot download anything from there. This stands in contrast to a computer, of which we know: with this or that hardware configuration and this kind of operating system, I can load any desired program onto it. We humans do not have such a thing. This vision does not seem attainable at the moment, but who knows what it will be like in 50 years.


“I believe we have made a big mistake in AI-research by treating intelligence as rational processing while neglecting emotions”


Now, research has shown that we also need emotional intelligence in order to make successful decisions. What are your thoughts on this?

I believe we have made a big mistake in AI-research by treating intelligence as rational processing while neglecting emotions. Since at least the mid-1990s, we have known that less emotional people tend to have problems making rational decisions. The original idea of viewing rationality and emotionality as polar opposites is therefore wrong, since they mutually define each other for a variety of reasons.

This holds not only in the field of communication, where interpersonal relations are incredibly important, but also in recalling memories: the most emotionally charged contents are retrieved preferentially from episodic memory. Hence, when we think of Artificial Intelligence, we cannot get past emotionality. As of yet, computers most likely do not have emotions. But what they are quite good at is recognising, processing and expressing emotions. This is comparable to an actor who does not necessarily feel the emotions he displays outwardly. He acts and simulates, and this is something that computers, robots and synthetic actors are increasingly good at.

Is it hence possible that relationships and mutual understanding, as they exist between humans, will develop similarly between humans and machines?

To be honest, I do not know, but I believe that a lot is possible in this regard. After all, humans already love objects which do not have any kind of intelligence – simulated animals, for example. It started with “Paro”, the robot seal, which was followed by “Aibo”, the robot dog. The dog, however, is rather a counter-example: robots made of metal or plastic cannot be touched in the same pleasant way that fur can, not even fake fur. I cannot rule out that there will some day be a robot that has such a thing as a sense of self.

There is already the notion that robots have some form of awareness, a precursor of consciousness, because they know where they are and where they should go. They thus possess a representation of their surroundings – otherwise they would constantly bump into things. However, they most likely do not possess actual consciousness yet. But perhaps this will happen some day. Maybe robots will need something like consciousness in the future because they will not be able to perform certain tasks any other way. Far more than 90 per cent of our actions happen subconsciously, but we learnt most of them at some point. In that moment, we consciously saved them, and robots may function in the same way.


“We are so enthusiastic in seeing only the dangers!”


Are there cultural differences in the way robots and Artificial Intelligence are accepted or rather dealt with?

Yes, there are. Germans – as well as Austrians – are among the most sceptical about technology worldwide, owing to the advanced technology that was used during the Second World War in the concentration camps to murder human beings. When renowned scientists, for example from the field of AI, come to Germany, often the only topics discussed are the dangers and horrors associated with technology. We are so enthusiastic in seeing only the dangers! This stands in stark contrast to other nations, which tend to view technology in a more positive light and, for example, say about cars: “Sure, humans have died from them, but cars have also significantly increased our mobility.” Germany is at present caught in a way of thinking which I cannot comprehend. I would like to state quite clearly that Germany and Austria run the risk of falling behind technologically. In Japan, for instance, the relationship to technology is completely different.

Over there, robots used for nursing care are much more accepted, because it is embarrassing to show weakness. To lie in a hospital bed and be taken care of by a robot is hence much less unpleasant for the patient than to be taken care of by a real person. Over here one would say: “The health insurance wants to save money and therefore sends a robot to take care of me, whereas I would have liked to be taken care of by Miss XY, with whom I can talk about the weather.” In this situation, nobody is right or wrong; it is simply a matter of cultural differences in dealing with technology. I was surprised to learn that one of the best-known Japanese robotics researchers has written a book with the title “The Buddha in the Robot”.

If someone from the Catholic Church talked of “Christ as a Robot”, it would be seen as pure blasphemy. I recently had fun giving a lecture about the “Robot Deus” as a reaction to Harari’s book “Homo Deus”. Everyone says that the robot is about to become the Lord of the World, so we should develop a theology of the prospective robot-god early enough. This idea was already practically implemented in Silicon Valley, where Anthony Levandowski has founded the new church “Way of the Future”, which positions Artificial Intelligence at the centre of its religious practices.

You are obviously not afraid of a future in which Artificial Intelligence plays an increasingly important role in society. Where does your optimism come from?

My personal history certainly is one reason. I was born in 1939 and have hence consciously experienced the Second World War as well as the reconstruction. Our lives have improved enormously through technological advances; they have brought incredible benefits. We live longer and under better conditions, we no longer have to freeze in winter, we work less, and we always carry the Encyclopedia Britannica with us in our smartphones. This does not mean that every technical innovation should be welcomed uncritically; one should also be sceptical and question the effects of new technologies, something that becomes clear when looking at the phenomenon of fake news. But nowadays the majority of people in Central Europe already live under paradisiacal circumstances. Those who do not recognise this, I am afraid, lack historical awareness.

What do you think poses more of a threat to our future, humans or Artificial Intelligence?

Since the election of Donald Trump I am pretty sure that humans pose more of a threat. In my opinion, Artificial Intelligence does not pose a threat at the present time.

What role can Artificial Intelligence play in the positive development of our society?

In the ideal case, we will all work even less and enjoy life even more. I can definitely imagine that that which creates personal and physical connections will gain greater importance once we have more time and are able to use this time positively. My prognosis is: more free time, more independence, more focus on the emotional, the artistic, on personality and literature. I think art and creativity will have an even greater importance than they do now. This opens up infinite possibilities for us. Personally, I find it desirable to be freed from drudgery, provided this does not entail financial losses.

What further steps are necessary in order to realise this potential with regard to Artificial Intelligence and robotics?

I think the pejorative manner in which mathematics and informatics are sometimes talked about needs to change. What we need is a change of mindset that allows us to see the beauty and aesthetics of abstract things. A mathematical formula, for instance, can be perceived as beautiful and intuitively correct without knowing its derivation. Of course one should not create a formula with regard to aesthetics alone, but the result has the potential to fascinate people – and so does an algorithm. We need to enable people to experience the aesthetics inherent in the MINT subjects: mathematics, informatics, natural sciences and technology. I would imagine quality education to be headed in this direction. For the fact that interaction so far consists of swiping a finger over the surface of a glass plate is actually a tragedy.

So what we need is a stronger appreciation of the technological, while at the same time not losing appreciation for the non-technological?

Absolutely. One needs to come to value both, and above all, one should not place them in competition with each other. But the German-speaking world in particular, with its many rules and provisions, gives a lot of room to the fear of technologies – exactly like the European Union. That said, the EU actually had an interesting working group, under the leadership of Mady Delvaux, which developed the proposition that intelligent computers could have something like a legal personality.

That sounds like you wish for more precise legal frameworks in order to enable a natural handling of Artificial Intelligence.

Yes, but above all one that enables the legal autonomy of intelligent computers. This could mean that one can no longer simply pull the plug. Conversely, in the case of misconduct, for example in a car accident, there could be direct consequences for the computer in question.

Does this mean that we will assign responsibility to Artificial Intelligence?

Yes, of course, we will even have to. In earlier times, the notion that an imaginary construct such as a corporation could bear responsibility was perceived as absurd. Today, it is a self-evident aspect of our economic system and our society. Needless to say, mistakes will often still be traceable to humans. Nevertheless, we need the concept of self-responsibility for a natural handling of autonomously operating artefacts that are integrated into our everyday societal life.

Interview: Kurt Bille

Translation: Leonie Dieske



Robert Trappl is the director of the Austrian Research Institute for Artificial Intelligence and a European pioneer in the fields of cybernetics and Artificial Intelligence. A central focus of his work is the role that emotions play in the development of Artificial Intelligence.
