Tuesday, November 6, 2012

zen of computing

WAY back in the day, when I was a college senior, I was a peer co-advisor for the computer science (CS) major at the University of California, Irvine (UCI). My partner’s name was also Doug – he was a great guy, a wise-guy gamer type, who complemented my lower-key inner nerd well. Before subjecting our peers to our advice, UCI put us (and co-advisors of all majors) through mass orientation and training. There was one exercise where each pair of same-major co-advisors was seated back to back – one person in each pair was handed a strange geometric object, with many faces and different composite shapes on each face, and the other partner was given a sketch pad and pencil. The pair’s task was to transfer understanding of the object from one partner to the other, through verbal questions and answers, so that the partner with the pad could later answer pre-specified questions (post-training) – questions unknown a priori to either co-advisor.
The CS team was the only team to complete the task successfully – all correct answers – I don’t think that others were even close!! Yes, I felt good about that and still do :-) , but beyond pumping up my ego (and it did), it started me thinking about what was special about my chosen field of computer science – what kind of thinking and communication, even values, did CS promote that were somewhat unique?
As CSers, we (Doug and Doug) were trained to communicate with the world’s stupidest entity capable of response – we were trained to program computers, and that training gave us a sense of when assumptions were problematic: to spell things out, to reflect on what we said, to recognize where there was ambiguity and seek to disambiguate, and other such lessons in communication. The peer-advisor training task described above was made for us, and as we were told afterward, it was designed to illustrate how easy miscommunication is. I had read ‘Zen and the Art of Motorcycle Maintenance’ a few years before as a freshman at UC Santa Cruz, and I naturally thought that there was a zen of Computer Programming (lower case ‘z’, if there is such a thing).
CS was and is a rich source of metaphor – I took its strange and beautiful theoretical abstractions, like the Pumping Lemma, as bare-bones illustrations of life lessons (e.g., that you could screw up repeatedly and still “complete a sentence” – perhaps more later). But the concrete activity of computer programming was rich in metaphor as well. One lesson I internalized from programming is that my thinking is almost always incomplete and/or wrong, if only in the small. When I program a computer I’ll only start writing code when I think I have a sufficient understanding of the problem and the solution, but even so I’ll produce a sequence of programs that are each incorrect in some way until, ideally, I reach the last one in the sequence, which I hope is (entirely) ‘correct’ (maybe I’ll post later on implications of agile design methodology). Any experienced programmer knows, though, that this ideal of perfect code is never achieved for complex programs. And however complex the programs I’ve written, alone or as part of a team, their complexity is trivial compared to the complexity of the world’s real challenges, where there is no doubt that my knowledge is always incomplete – my knee-jerk reaction to a fix, in the real world or when programming, is pitifully myopic. I’ll save for another day my thoughts on this shortsightedness, and on how it has worsened, even in the limited context of programming, and even in the face (because?) of more powerful technology. In any case, though intellectual humility may grow from programming (we can hope), so does the tenacity to see a solution through. Humility and tenacity make for a powerful partnership, one that I think computer scientists have something of a unique perspective on.
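For readers who haven’t met it, here is the standard statement of the lemma behind the metaphor (my paraphrase of the textbook version – the “pumped” part can be repeated any number of times and the string still belongs to the language):

```latex
% Pumping Lemma for regular languages (standard statement):
% If $L$ is a regular language, then there exists a pumping length $p \ge 1$
% such that every string $s \in L$ with $|s| \ge p$ can be written as
% $s = xyz$, where
%   (1) $|xy| \le p$,
%   (2) $|y| \ge 1$, and
%   (3) $x y^i z \in L$ for every $i \ge 0$.
```

The life-lesson reading: no matter how many times you repeat ($i = 2, 3, \ldots$) the middle part $y$, the result is still a legal “sentence” of the language.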
A cousin to my-thinking-is-always-imperfect, again learned from programming, is to strive to see something for what it is, not (prematurely) what I would have it be. I can’t tell you how many times I have looked at computer code I’ve written and missed, over and over again, a bug, because I was projecting what I thought the code “should” be doing rather than what the code actually instructed the computer to do. I’ve had students stare at their code for hours trying to debug a program, then show it to someone else, who spots the problem almost immediately – a fresh set of eyes is so important sometimes, and in principle it need not be someone else’s eyes, but often I’m just in a rut and can’t find my way out alone.
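A hypothetical miniature of that experience (my own illustrative example, not code from any class): the function below is meant to average only the positive numbers in a list, and a reader expecting that behavior can skim right past the misplaced count.

```python
def average_positive(values):
    """Intended: return the average of the POSITIVE numbers in values."""
    total = 0
    count = 0
    for v in values:
        if v > 0:
            total += v
        count += 1  # bug: counts every element, not just the positive ones
    return total / count

# We *expect* average_positive([2, 4, -6]) to be (2 + 4) / 2 = 3.0,
# but the code as written returns 6 / 3 = 2.0.
print(average_positive([2, 4, -6]))
```

The eye supplies the indentation the mind intended; a fresh reader, with no intention to project, sees the `count += 1` outside the `if` right away.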
Clearly I have the goal of taking the code to an ideal state, but that can’t happen until I see the code for what it actually is. This lesson, and its generalization from code to other built entities, is something I knew as an undergraduate, but it took many years, and much help from others who sometimes bore sledgehammers :-) , before I internalized it as a lesson about self – seeing who and what I really was as a prerequisite to productive change. Some of these helpers became dear friends.
I’m not sure whether this account seems manufactured – it’s not – I saw, and still see, grand life lessons, stripped bare but recognizable, in computer programming and in computer science generally. Recognizing these lessons excited me back in the day, and still does. I hope that CS excites its new practitioners in like ways.

Friday, August 24, 2012

AIs, robots, and humans

In the opening lecture of intro AI (CS 260), I discussed some contrasts in what AIs and robots are capable of – they have come a long way, but still have a long way to go. I used these videos to illustrate some points:

   Bigdog (http://www.youtube.com/watch?v=cNZPRsrwumQ); 
   Asimo (http://www.youtube.com/watch?v=Q3C5sc8b3xM); 
   Dr. J (http://www.youtube.com/watch?v=f7njB1T-Xjk); 
   Willie Mays (http://www.youtube.com/watch?v=7dK6zPbkFnE ); 

I'm not sure Asimo could get back up if it fell!!! 

There are differences in mental abilities too (e.g., categorization and pattern matching:  http://www.youtube.com/watch?v=eq-AHmD8xz0 ), but the cognitive prosthesis view suggests that AI can be used to augment myopic and otherwise limited human reasoning (e.g., Dan Ariely http://www.youtube.com/watch?v=9X68dm92HVI ) to yield a powerful human/AI hybrid intelligence.