2016-08-14

IBM briefs the White House about artificial intelligence: fundamental questions & emotions

Fundamental questions in AI research, and the most important research gaps

(Version from: July 28, 2016)
And emotions! Psychological research shows that emotions are a very important component of human action regulation. One may now think of the ideal of an AI that is not affected by any emotions. I am skeptical that this will work for true universal AI systems of the first generations - systems that are still elements of science fiction, despite the progress in AI research. My reasoning is simple: AI is currently modeled after the only intelligent beings we know - us humans. Of course animals have certain degrees of intelligence too, but when I look at the examples and visions given for AI entities, it seems the human level is the goal. Now, we humans are terribly bad at logical reasoning in daily life; we pass most of our daily challenges with the support of our 'gut feeling', as Gerd Gigerenzer has shown. So if we take ourselves as the model and leave half of the necessary ingredients out, we will fail!

Microsoft Tay, AE & the way there.

At the beginning of 2016 Microsoft experienced a PR disaster. Tay is a chatbot that was brought online on the 23rd of March 2016 and was taken offline again after less than 24 hours. What happened? Microsoft's vision was well intended, and Tay was a promising step towards a new generation of chatbots. Normally a chatterbot, or chatbot, is a more or less sophisticated assembly of scripts that react to certain keywords someone types in a chat conversation. Tay, by contrast, is a deep learning AI system that was supposed to learn from its interactions, i.e. the exchange of text messages via Twitter. It did indeed learn! It learned the things a gullible dimwit (emotionally speaking) learns when you put him on a schoolyard and tell the kids to come and teach him stuff. Thousands of enthusiastic teenagers around the world began to 'teach' Tay. Some of the highlights can be seen in an article on gizmodo.com. A clinical psychologist would probably have diagnosed Tay as sociopathic or even psychopathic, lacking empathy and common moral values.
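
To make the contrast concrete, here is a minimal sketch in Python of the classic script-based chatbot pattern described above - no learning at all, just canned responses triggered by keywords. The rules and answers are made up for illustration:

    import random

    # A keyword-driven chatbot in the pre-Tay style: fixed scripts,
    # no learning from the conversation whatsoever.
    RULES = {
        "hello": ["Hi there!", "Hello!"],
        "weather": ["I hear it is sunny somewhere."],
        "bye": ["Goodbye!", "See you!"],
    }
    FALLBACK = ["Interesting, tell me more.", "Why do you say that?"]

    def reply(message: str) -> str:
        # Strip punctuation so "Hello!" still matches the keyword "hello".
        words = [w.strip(".,!?") for w in message.lower().split()]
        for keyword, answers in RULES.items():
            if keyword in words:
                return random.choice(answers)
        return random.choice(FALLBACK)

    print(reply("Hello bot!"))      # one of the greetings
    print(reply("Nice day today"))  # a fallback answer

However many rules you add, such a bot can only ever say what its authors scripted - which is exactly why Tay's learning approach was both promising and risky.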

How do we humans acquire 'common moral values'?

In the movie 'Kindergarten Cop' there is a pretty self-explanatory scene. Hardened police officer John Kimble has to go undercover as a kindergarten teacher. When he is introduced to his class for the first time, a small boy says 'Girls have a vagina and boys have a penis.' All the children laugh, and John Kimble tries to control the situation by saying "Thanks for the tip!". What happened here is called social learning and was described as a theory by the psychologist Albert Bandura. The model for the other children was the little boy who delivered the delicate line, which is a running gag throughout the movie. The other children now see the reaction of the grown-up and realize that this line is something special and funny - a conscious representation of the concepts 'penis', 'vagina', 'moral' and 'shame' doesn't even need to be established here. It is sufficient for the other children to realize that these words are emotionally loaded and create tension.

Deep learning vs. semantic networks.

Trying to model the above example, we would have little success using a semantic network, assuming the AI has the same quantity of words and concepts as a kindergarten child. Most kindergarten children don't know the definitions of the words 'penis' and 'vagina', but they realize through social learning that something is 'funny' about these words, because they feel 'funny'. Here an approach utilizing deep learning would be promising. Training a system with enough 'funny' examples, witnessed in a model, would connect the feeling 'funny' with the words 'vagina' or 'penis' without the need for a semantic definition of these words.
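
As a toy stand-in for such training (a real deep learning setup would be overkill to show the principle), the following Python sketch learns the emotional loading of words purely from observed reactions. The utterances and labels are invented; the point is that no definition of any word is ever given:

    from collections import defaultdict

    # Each observation pairs an overheard utterance with the observed
    # group reaction (did the other children laugh?). That reaction is
    # the only training signal - pure social learning, zero semantics.
    observations = [
        ("the boy said penis", True),
        ("girls have a vagina", True),
        ("we drew a house today", False),
        ("the teacher read a book", False),
        ("he said penis again", True),
    ]

    laughs = defaultdict(int)
    seen = defaultdict(int)
    for utterance, caused_laughter in observations:
        for word in utterance.split():
            seen[word] += 1
            if caused_laughter:
                laughs[word] += 1

    def funniness(word: str) -> float:
        # Learned emotional loading of a word - no dictionary involved.
        return laughs[word] / seen[word] if seen[word] else 0.0

    print(funniness("penis"))  # 1.0 - always co-occurred with laughter
    print(funniness("book"))   # 0.0 - never did
    print(funniness("said"))   # also 1.0 here, an artifact of the tiny sample

A deep network trained on enough such episodes would generalize far better than this word counter, but the training signal would be the same: the witnessed emotional reaction, not a semantic definition.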

But emotions are chaotic! How to model them?

That something is chaotic doesn't mean you can't model it, as chaos theory showed in the 1980s. Back then, very simple computer programs could model deterministic chaos, which means that the chaos created by the machine is based on a quite simple set of rules. The theory also teaches us that deterministic chaos is not to be confused with completely random events. Many chaotic systems tend to gravitate towards an attractor, a relatively stable system state - for the moment. So do emotions! Emotions can change in a seemingly chaotic fashion, but on closer inspection there is a system in these changes, and they are usually not completely random. Maybe emotions can be modeled on sets of rules?
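
The logistic map is the textbook example from that era: a single line of arithmetic that, depending on one parameter, produces either deterministic chaos or a stable attractor. A minimal Python sketch:

    # The logistic map x' = r * x * (1 - x). For r = 4.0 this simple rule
    # produces deterministic chaos; for r = 3.2 the very same rule settles
    # onto an attractor, a stable oscillation between two values.
    def logistic_trajectory(r: float, x: float = 0.2, steps: int = 20):
        values = []
        for _ in range(steps):
            x = r * x * (1 - x)
            values.append(round(x, 4))
        return values

    print(logistic_trajectory(4.0))  # chaotic: no visible pattern
    print(logistic_trajectory(3.2))  # attractor: flips between ~0.513 and ~0.799

Note that the chaotic trajectory is still fully deterministic: run it again with the same starting value and you get exactly the same "chaos".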

Professor Dietrich Dörner, the PSI Theory and Artificial Emotions (AE).

The question of how to model emotions in a computer program was exactly the challenge Professor Dietrich Dörner at the University of Bamberg faced. His research field was human performance in complex situations, modeled by computer simulations. As early as the 1970s, Dietrich Dörner and his team began to write computer programs that interacted with humans. Today we would probably call them computer games. The only difference from computer games was that the interaction between human and program was recorded on a very detailed level. His life's work showed that we humans generally perform terribly in complex situations. He researched which people perform well and what the factors of good and bad performance are. In the 1990s the necessity arose to simulate emotions in simulated entities in order to study the interaction between human and computer. So, interestingly, the goal was not to implement a general theory of Artificial Emotions, or AE - it was to have a believable simulated entity on a computer screen.
This gave birth to the PSI theory, which was implemented as a simulation of a small robot that explores an island to satisfy its needs for food, water and friendship. For me the funny fact here is that Dietrich Dörner managed to implement a pretty good model of emotions that works believably! Today one can run this model on any smartphone. Joscha Bach was one of his students and evolved the PSI theory by creating MicroPsi. Version two is publicly available on GitHub.
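
To give a flavor of the mechanism - this is emphatically not Dörner's or Bach's actual code; the names, numbers and the simplified modulator are invented for illustration - a need-driven agent in the spirit of that island robot can be sketched in a few lines of Python:

    # A toy sketch loosely inspired by the PSI idea described above:
    # needs deplete over time, the felt "urge" is the deficit from the
    # set point, and the strongest urge selects the behavior.
    class PsiLikeAgent:
        def __init__(self):
            # Needs range from 0.0 (fully depleted) to 1.0 (satisfied).
            self.needs = {"food": 1.0, "water": 1.0, "friendship": 1.0}
            self.decay = {"food": 0.05, "water": 0.08, "friendship": 0.03}

        @property
        def arousal(self) -> float:
            # A crude emotional modulator: overall pressure of unmet needs.
            return sum(1.0 - v for v in self.needs.values()) / len(self.needs)

        def tick(self) -> None:
            # Needs deplete each time step.
            for need, rate in self.decay.items():
                self.needs[need] = max(0.0, self.needs[need] - rate)

        def act(self) -> str:
            # The strongest urge wins; acting on it partially satisfies it.
            target = max(self.needs, key=lambda n: 1.0 - self.needs[n])
            self.needs[target] = min(1.0, self.needs[target] + 0.3)
            return f"seeking {target}"

    agent = PsiLikeAgent()
    for step in range(5):
        agent.tick()
        print(step, agent.act(), f"arousal={agent.arousal:.2f}")

Everything here is a simple set of rules, yet watching such an agent switch between water, food and friendship already starts to look like motivated, emotionally modulated behavior - which was exactly Dörner's point.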

So, does this mean AI systems will have feelings and a soul?

My opinion is that these are philosophical questions. I believe that in a few decades we will have fully evolved general AI systems that seem to have feelings, dream of electric sheep and maybe even believe they have a soul. But since medicine, neuroscience and psychology have not yet answered the question about the nature of consciousness, we simply have no empirical benchmark to prove the above postulates. Until then, it has to be good enough to have working models of emotions.

To be continued...
