Saturday, April 16, 2016

AI ten years later

The 10 year anniversary of my interview on Artificial Intelligence (AI) with Adelyn Jones on Lightning 100 just passed. I’m not really surprised that much of the interview is as relevant today as it was then, at least at the level of abstraction at which we were talking. I think that some of my attitudes have changed, though -- I seem more open today to some of the blue sky at the end of the interview than I was then. Blue sky expectations are also what bring so many new students to AI, who are then disappointed when AI is taught as a collection of disparate tools rather than an integrative whole. I want to deliver integrated, blue sky AI to them -- I had the same expectation of AI as an undergraduate too! So part of my renewed openness is constrained wishful thinking.

We didn’t plan the interview ahead of time, but editing is a great thing -- I remember helping with the edits after the interview -- the whole process was a revelation, one I recalled when I was Director of the Vanderbilt Institute for Digital Learning. The interview audio recording is at https://goo.gl/98oZHE.

00:00 Cool music and Adelyn’s intro

00:45 What is AI? Exploring, evaluating, and acting on alternatives was my answer then, and I still think it is definitional of intelligence, at least in part.
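
For readers who want that "explore, evaluate, act" answer made concrete, here is a minimal sketch in Python; every name and the toy number-line task are invented for illustration, not anything from the interview:

    # A minimal sketch of "exploring, evaluating, and acting on alternatives."
    # Every name here is a hypothetical placeholder.

    def choose_action(state, generate_alternatives, evaluate):
        """Pick the alternative with the highest evaluated score."""
        alternatives = generate_alternatives(state)                     # explore
        scored = [(evaluate(state, alt), alt) for alt in alternatives]  # evaluate
        best_score, best_alt = max(scored)                              # act on the best
        return best_alt

    # Toy usage: pick a move on a number line, preferring to land near 10.
    moves = lambda s: [s - 1, s + 1, s + 2]
    closeness_to_ten = lambda s, alt: -abs(10 - alt)
    print(choose_action(3, moves, closeness_to_ten))  # -> 5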

01:35 What are AI applications? Medicine, surgery, military, space exploration, game playing, and storytelling were areas I highlighted. I would now stress environmental and energy applications.

02:30 How are AIs programmed? I talked about AI developers defining analogs to grammars by generalizing experience. This was an impromptu response that I nonetheless think is essentially true, at least if “grammars” is broadly construed. AI also includes other methodologies for which “exploring alternatives” through “grammars” is a less accurate descriptor of what happens.
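To make the grammar analogy concrete, here is a sketch with invented rules: grammar-like productions define which alternatives exist to be explored -- in this toy case, the possible paths through a simple dialogue:

    # An invented "grammar" whose productions license the successors of a
    # state; the rules thereby define the space of alternatives.
    RULES = {
        "start":   ["greet", "ask"],
        "greet":   ["ask"],
        "ask":     ["answer", "clarify"],
        "clarify": ["answer"],
    }

    def paths(state, path=()):
        """Enumerate every rule-licensed path from `state` to "answer"."""
        path = path + (state,)
        if state == "answer":
            yield path
        for successor in RULES.get(state, []):
            yield from paths(successor, path)

    for p in paths("start"):
        print(" -> ".join(p))
    # start -> greet -> ask -> answer
    # start -> greet -> ask -> clarify -> answer
    # start -> ask -> answer
    # start -> ask -> clarify -> answer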

03:30 Drawbacks of AI? The legal implications (e.g., can you hold an AI responsible for wrongful behavior?) were a concern then, in reference to medical diagnostic systems, for example. I just attended an AI and Law conference (http://watsonesq.org/) at which these same issues were discussed, though they were not the focus of the conference.

04:35 What are you (me) working on? At the time I was working on machine learning for cancer informatics with Mary Edgerton and on military personal assistants, and I mentioned some other applications at Vanderbilt, including tutoring and story (cartoon) remixing. I went to the National Science Foundation not long after this interview, and it opened me up to a whole new set of possibilities, including AI storytelling and AI and sustainability -- being opened to all new possibilities comes with pros and cons, but I have no regrets.

05:35 What is in its infancy? Self-driving cars as the infant that would become the adult of intelligent highways was (and still is) a good example. Intelligent highways are still a long way off though, in large part because of economics, I think -- the technology is probably close to ready, but how long before large numbers of individuals can afford smart cars, and cities can afford to (and will) retrofit road networks and other infrastructure? A long time, I think. I think that's the case with much of the AI blue sky -- there are large disincentives for doing what is technically possible, or will be technically possible. That said, a lot more cars are “talking” to each other now than in 2006, broadcasting traffic conditions, and there are probably discernible smart herds of cars operating on dumb highways.

06:30 More on strategic choice of applications: Automation through AI is attractive for support tasks, such as across-domain recommender systems and personal assistants. And “support” for humans varies from diagnosis of ailments (in support of doctors) to “support” on routine tasks like vacuuming.
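
As a toy illustration of the recommender idea -- the users, items, and ratings below are all made up:

    # A toy nearest-neighbor recommender: suggest items that the most
    # similar other user rated, but this user hasn't seen.
    ratings = {
        "alice": {"paper_a": 5, "paper_b": 3, "talk_x": 4},
        "bob":   {"paper_a": 5, "paper_b": 2, "talk_y": 5},
        "carol": {"paper_b": 5, "talk_x": 1},
    }

    def similarity(u, v):
        """Crude similarity: count of shared items both users rated highly."""
        shared = set(ratings[u]) & set(ratings[v])
        return sum(1 for i in shared if ratings[u][i] >= 4 and ratings[v][i] >= 4)

    def recommend(user):
        """Recommend the unseen items of the most similar other user."""
        peer = max((v for v in ratings if v != user),
                   key=lambda v: similarity(user, v))
        return [i for i in ratings[peer] if i not in ratings[user]]

    print(recommend("alice"))  # -> ['talk_y'] (bob is alice's nearest peer)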

08:19 How human-like can an AI become? We got into affective computing, particularly the mimicking, perceiving, and feeling of emotion. This was also a topic at the Law and AI conference, and affective computing is an area that I think has generally taken off in the last 10 years, though its introduction to AI certainly predates 2006 (e.g., http://affect.media.mit.edu/).

09:25 Are phony emotions good? No way -- we have enough phoniness, and we don’t need machines to practice it -- it would have negative consequences for our perceptions of personhood, including disillusionment (http://www.vuse.vanderbilt.edu/~dfisher/ai.theology.html), but I think there are important caveats here.

10:18 Why is affective computing a goal? Among the caveats is that sensing emotion can be useful when humans are in the loop. But here too I am conflicted, and there is otherwise plenty of nuance. For example, I saw PARO, the robotic seal (e.g., https://www.youtube.com/watch?v=2ZUn9qtG8ow), used with hospice patients. The seal is phony, but it is still a positive presence for some -- and remember Furby (http://www.radiolab.org/story/137469-furbidden-knowledge/)!

10:52 How is emotion perception programmed into a machine? We talked about machine learning of associations between emotions and facial expressions and physiological readings -- machine learning of this type (supervised, as clarified below) is still all the rage. I’m not as interested as I once was in the science of machine learning as it is currently practiced, but its applications are ubiquitous and often very interesting.

11:55 Clarification on supervised learning of emotion recognition through sight and sound. We talked about the help window of a department of motor vehicles as an example -- this was a fun exchange, with some of it undoubtedly edited out -- I was truly relaxed by this point in the interview.
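
A hedged sketch of that supervised setup, with the DMV help window in mind; the features, numbers, and labels are toy stand-ins I've invented, not real sensor data:

    # Supervised learning of emotion labels from invented face/voice features.
    # Requires scikit-learn (pip install scikit-learn).
    from sklearn.linear_model import LogisticRegression

    # Hypothetical features per customer at the help window:
    # [brow_furrow, mouth_curve, voice_pitch_rise]
    X_train = [
        [0.9, -0.8, 0.7],   # furrowed brow, frown, rising pitch
        [0.1,  0.9, 0.1],   # relaxed brow, smile, steady pitch
        [0.8, -0.6, 0.8],
        [0.2,  0.7, 0.2],
    ]
    y_train = ["frustrated", "content", "frustrated", "content"]

    model = LogisticRegression().fit(X_train, y_train)
    print(model.predict([[0.7, -0.5, 0.6]]))  # -> ['frustrated'], most likely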

13:25 How does the machine learn to respond in emotion charged interactions? A machine is good at “sitting there and taking it” :-)

14:50 What happens when we create a self-aware AI? What are its rights? This was right out of an earlier write up on the theological implications of artificial intelligence (http://www.vuse.vanderbilt.edu/~dfisher/ai.theology.html), something I may get more into one day.

16:00 Economic disincentives for creating genuinely feeling robots -- see 05:35.

16:19 Feeling robots as surrogates for humans in science fiction -- again, the AI and theology commentary at 14:50 is relevant here.

16:35 What are differences between AI hardware platforms and the human brain? (I was winging it, but I think I got the big picture right)

17:45 Many in AI are not interested in human intelligence so much as they are interested in “alien intelligence” -- anything that gets the job done (for the pragmatists) and/or a genuine curiosity about all the possible kinds of intelligences that can exist. Who is to say that a whale’s intelligence resembles ours in all or even most aspects?

18:28 Is downloading the mind a possibility? I think I am more accepting of this possibility now -- it is blue sky for sure, but after 10 years I am much more appreciative that the future is a big place. I don’t know what’s possible. And I want to build a planetary AI environmental consciousness -- almost as crazy.

Thanks for the memories, AJ!