I initially blamed age for a sudden increase in the number of "typos" that were actually correctly spelled words, but out of context. Then I started suspecting that stupid (or rather annoying!) word-completion algorithms were responsible, and recently I've started catching them in the act. My favorite artificial intelligence (AI) gaffes are
"wild west" ended up as "mild west" (the 'w' in "wild" changed to 'm' mid-way through the phrase though I didn't correct it until I was done ripping off text)
"dinner party" ==> "sinner party"
"cutting and pasting" ==> "cussing and pasting"
These mistakes lead to funny results. In most cases the auto-correction on various platforms is just plain screwy; in other cases, of course, it works well, but those are generally cases where I've completed the word, it's flagged as a misspelling with a red underline, and a change is offered. It seems like some software is just trying to be TOO proactive and TOO helpful, perhaps like some people some of the time.
I would prefer slightly more patient AI ... AI without the emotional needs.
Having my AI students do a thought experiment -- reverse engineering such software -- will be a neat exercise in my AI class, I think, because while this software is really, really annoying, and just plain messed up, it (or its developers) is a lot more interesting and instructive in its neediness than I first appreciated!
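Just to make the thought experiment concrete, here is a toy Lisp sketch of an overeager corrector -- the dictionary, the one-substitution policy, and the function names are all made up for illustration, certainly not how any real completion engine works:

(defun one-sub-away-p (w1 w2)
  ;; True when two words have the same length and differ in exactly one character.
  (and (= (length w1) (length w2))
       (= 1 (count nil (map 'list #'char= w1 w2)))))

(defun overeager-correct (token dictionary)
  ;; Leave real words alone; otherwise grab the FIRST near-miss in the
  ;; dictionary -- which is exactly how a typo can come back as "mild".
  (or (find token dictionary :test #'string=)
      (find-if (lambda (w) (one-sub-away-p token w)) dictionary)
      token))

;; (overeager-correct "qild" '("mild" "wild" "west")) => "mild" -- the gaffe:
;; both "mild" and "wild" are one substitution away, and this corrector is
;; TOO proactive to ask which one I meant.

A more patient version would, at a minimum, rank the candidates, or wait until the word (or the phrase!) is finished before committing.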
Thursday, July 25, 2013
Sunday, July 21, 2013
Playing with Pictures
I originally posted the following on my now-extinguished Wordpress blog in August 2011, and used the story in my Fall 2011 AI class as an example of problem solving that we'd eventually like computers to do.
---
During a driving tour of the Midwest in July that Pat and I made in our new Honda Fit, I was continually posting pictures on Facebook in a vacation album. Less than midway through I exhausted the 200-picture limit per album and was tempted to start new albums, one for each day or two, but a 200-picture limit is plenty, I thought, and I liked the idea of using the constraint to prune out all but my favorites and pictures that were not thematically redundant; I also constrained myself to keep those already receiving a thumbs up or comment, etc. After the trip I was invited onto Google+ by Russ and Mary Lou, and Google (Picasa) has no album limit as far as I can tell, so there are 750+ pictures there! (https://picasaweb.google.com/106374191437655932029/SummerVacation2011 ) Talk about a lack of discipline.
A very cool functionality is that I can locate these pictures on Google Maps, using any of the modalities – maps, satellite view, street view, or Earth. In locations with sufficient resolution I could place the photo right on the spot where I was standing when I took the picture, though in some cases there appears to be some drift from the location where I placed it when I look back. There didn't appear to be a way to specify the orientation of the photo – what direction I was facing when I took it – but I am guessing someone will add that in the near future. In any case, it's very cool.
Since I was placing the pictures a couple of weeks after the trip, there were different heuristics I used to place them – sometimes it was straightforward – a particular highway junction, or something otherwise named on Google Maps, like a school or a cemetery or a mountain peak. The order in which pictures were taken offered some constraints, since having located one picture narrowed the possible locations for the next, but frankly, I have a good memory for such things as events and sequences. In one case though, even with a known restricted area stemming from sequencing information, I was trying to locate a picture in the tiny town of Scribner, Nebraska, but I saw no way to identify the precise location of a picture I had taken of an old church or the like, with a steeple (attached below). I was in the satellite view, at maximum resolution, struggling to see some identifying visual cues, but the steeple itself was impossible to make out from a direct overhead view … and then I saw the SHADOW of the steeple in the satellite image!!! Amazing!! I'm attaching that image to this note. That was just neat.
I had so much fun that I created a couple of other albums on Picasa. One of these was from my trip to Copenhagen in 2009, while at NSF (https://picasaweb.google.com/106374191437655932029/Copenhagen2009 ; http://doughfisher.blogspot.com/2013/07/copenhagen-2009.html). I had taken a redeye from Dulles in DC to Copenhagen, arriving about 7 or 8 AM the day BEFORE the conference would start in a port city some ways away. I never sleep on planes, not even redeye flights, so I was pretty trashed when I arrived, but how often do I get to Copenhagen! (OK, I'd been there in 2008 as well.) So before taking the train to Helsingor, I walked around Copenhagen. Even though it had been more than two years, I remembered the sequence well and was able to place the pictures in the same way I had for our recent vacation, including confirming the exact location of a picture of a statue from its shadow! Statue and shadow attached.
What was even more striking in this case than with our recent vacation was the affect of reliving that walk as I placed the pictures – I saw the images and remembered roughly the walking sequence, using cues from Google views to fill in a few gaps and otherwise reduce uncertainties. A good friend had died not long before that trip, and that had been on my mind during the stroll through Copenhagen, and a hint of that emotion, in the form of reflection, came back.
I had so much fun doing the picture placement that I’m trying to think about how to formalize the activity as a project for my artificial intelligence class this coming semester. There is also a good human-computer interface problem here – in many cases I could only approximately place the pictures (e.g., highway shots on our driving vacation) and representing the variable uncertainty associated with physical location would be desirable.
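As a first stab at that formalization, here is a toy Lisp sketch of the sequencing constraint alone – assuming flat x-y coordinates and a made-up maximum travel speed, so an assumption-laden caricature rather than anything I'd hand to the class as-is:

(defun dist (a b)
  ;; Euclidean distance between two (x y) points -- toy flat-map geometry.
  (sqrt (+ (expt (- (first a) (first b)) 2)
           (expt (- (second a) (second b)) 2))))

(defun prune-candidates (prev-cands next-cands elapsed max-speed)
  ;; Keep only those candidate locations for the next photo that are
  ;; reachable from SOME candidate for the previous photo in the time
  ;; elapsed between the two shots.
  (remove-if-not
   (lambda (next)
     (some (lambda (prev) (<= (dist prev next) (* max-speed elapsed)))
           prev-cands))
   next-cands))

The surviving candidate sets are themselves one crude representation of that variable location uncertainty; a radius or a distribution per photo would be a richer one.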
I am writing this note on a whim – I watched the last installment of Ken Burns' National Parks, and was remembering my Boy Scout days of backpacking through places like Tuolumne Meadows in Yosemite and Mt. Whitney in Sequoia National Park. I've got some great stories of the Colorado River trip and bears raiding our campground in Yosemite, but I think my favorite trip was Kings Canyon, which probably started outside Mammoth, but in any case, we hiked among some incredible lakes, most above timberline: Thousand Island, Emerald, Ruby, Garnet, and Shadow lakes. There is nothing like hiking above timberline along a ridge when a wind hits your back -- really an amazing feeling.
My crispest memories are probably of Garnet Lake – it was amazing when I was a Boy Scout, and I returned in graduate school with friends Rogers and Pete. In any case, I looked for some pictures on the Web and found these: http://www-personal.umich.edu/~jensenl/visuals/album/2006/thousand/ . And many others of course. Scroll down – there are some very nice pictures here, and I remember more than a few scenes – I could point out the little island in Garnet Lake I swam to and almost died (joking, sort of) – it was freezing! The places we cooked and washed. And I can probably place some of the pictures on the trail map.
This recent play with pictures suggests some possibilities for immersion into virtual worlds – the technology is pretty primitive now, but because it's piggybacking on memory of real experience, the affect is quite powerful.
Scribner Steeple (above)
Scribner Google Map image, with shadow! (below)
Copenhagen Statue (above)
Copenhagen Google Map image, with shadow! (below)
Monday, July 15, 2013
Reusing other Instructors' Assignments ... not! (or ?)
I am at the Educational Advances in Artificial Intelligence symposium (EAAI-13), and we just concluded a session on educational repositories, particularly online repositories of homework assignments. Repositories of educational resources are a topic near and dear to my heart, but at least in the case of repositories of homework assignments, there appears to be no, little, or at best weak anecdotal evidence that assignments are being reused. At a minimum, don't we want repositories to be "instrumented," like my (and everyone's) YouTube channel(s), so I can see downloads, likes, dislikes, and more sophisticated measures of usage that are specific to homework assignments?
It's hard to know whether a homework assignment that has been posted in an educational repository is actually used by another instructor, unless an instructor who has used it gets back to me and tells me so. There is some work in thinking about how to do this. But there is also low-hanging fruit. First, we can measure downloads; but beyond this, we as an educational community can take a small step toward a scholarly culture surrounding educational materials by designing licenses specific to this kind of content.
For example, a license for usage of educational content could allow the material to be used by others (e.g., following any of the principles of Creative Commons licenses: http://creativecommons.org/), but additionally require that the user report back on the usage to the author (typically, the copyright holder), whether the use is as-is or derivative.
I think that this would be an enormous help in evaluating the extent and manner of use of educational material, going well beyond measuring downloads, and ultimately in evaluating the utility of educational materials to the educational community.
Let's ask people about their use, through a license that requires reporting back (and nothing else), rather than simply depending on the ability of machine methods to infer it.
Thursday, May 23, 2013
The Pumping Lemma
This was originally posted in October 2008 while I was at the National Science Foundation (NSF) in Arlington, Virginia -- the 12th floor of NSF was the big room, but this post is primarily on the wonders of computer science.
-----
My talk on "the 12th floor" yesterday went well, and I'm here on the second floor of the Arlington library, looking out on a beautiful day ... maybe a matinee later, and DC tomorrow.
I've been remembering the Pumping Lemma, a little gem from computer science (http://en.wikipedia.org/wiki/Pumping_lemma). The Pumping Lemma says that for certain languages that have an infinite number of possible sentences, any sentence that is 'long' enough ("You are making a big mistake") contains a fragment (e.g., "big") that you can repeat an arbitrary number of times (or exclude entirely), and you will still have a (legal) sentence -- so if "big" is the fragment, then "You are making a mistake" is a sentence (excluding "big" from the original), and "You are making a big, big mistake" and "You are making a big, big, big, mistake" are sentences (if you know enough to note the comma, you know enough to know the fix; and I don't think English is one of the "certain" languages that the Pumping Lemma covers in all cases, but it's ok). More generally, if I've got an adjective in a sentence, I could repeat that adjective (or any other, for that matter) an arbitrary number of times and still end up with a (legal) sentence.
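The pumping operation itself is trivial to write down -- here it is in throwaway Lisp, with the decomposition of the sentence into x, y, z supplied by hand:

(defun pump (x y z i)
  ;; Return x . y^i . z -- the fragment y repeated i times, excluded when i = 0.
  (concatenate 'string
               x
               (apply #'concatenate 'string (make-list i :initial-element y))
               z))

;; (pump "You are making a " "big, " "mistake" 0) => "You are making a mistake"
;; (pump "You are making a " "big, " "mistake" 3) => "You are making a big, big, big, mistake"

The lemma's content, of course, is not this operation but the guarantee that for any long-enough sentence some such decomposition exists.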
What does this have to do with anything? I first saw the pumping lemma(s) at about age 21, my first or second quarter at UC Irvine, after transferring schools (UCSC, USNA) and changing majors twice before, having my heart broken a few times, etc :-) When I saw these lemmas and their proofs I was struck by their formal beauty, but I also saw them as descriptive metaphors of my life to date -- you can repeat the same mistakes over and over, but still have hope of completing a sentence -- I'm serious, I was wow'd by it, as well as by the metaphorical significance of other gems in computing and mathematics.
There are some parts of a partial sentence, of course, that you can't repeat arbitrarily and still have hope of completing a sentence; for example, you can't write "You are making a a " and have any hope of completing the sentence so it's legal -- once you repeat that 'a' in the way written, you end up in a 'dead' state. And you can of course keep repeating the "big" and never get to the final "mistake" before you "run out of time", and since "You are making a big, big, ..., big" isn't a legal sentence, that is again a "dead" state -- both of these examples have formal interpretations. And of course, the repetition (or pumping) need not be metaphorical of mistakes, but of successes, and a lot in between.
The pumping lemmas are concerned only with syntax and not semantics -- you can understand and appreciate a string such as "Made you a mistake big" or even "You are making a big, ..., big" -- but I'm retaining hope for a sentence, and a compound, rich sentence at that.
Thank goodness for the pumping lemmas -- I've reflected on their lessons since age 21 :-). Hallelujah.
Sunday, March 24, 2013
Goldbach's Conjecture, Turing Machines, and Artificial Intelligence
When I was a graduate student I'd work on proving Goldbach's Conjecture when I needed a break from my real research. I'd focus on what this Wikipedia article (http://en.wikipedia.org/wiki/Goldbach's_conjecture) calls the strong form: every even natural number (aka even positive integer) greater than 5 can be expressed as the sum of two prime numbers. So, for example, 6 = 3 + 3, 8 = 5 + 3, 10 = 5 + 5 (and 7 + 3), 12 = 7 + 5, .... Again, this is a conjecture that is believed to be true by virtually everyone, and its truth has been demonstrated with computers up to huge even numbers, but no one has proved its truth for all even numbers, and there are an infinity of them.
The really attractive thing about number theory is that so many of the problems are so easy to understand by so many -- you may not be able to solve the problem, but you sure understand what's being asked! An approach I hit upon to prove Goldbach's conjecture (or I suppose disprove it, or perhaps show that you couldn't prove it one way or the other!) was essentially this: write a computer program that would run forever (if you were to run it), generating the even natural numbers one after the other; write another computer program that would run forever (again, only if you were to actually run it), generating all the sums of two primes "in sequence"; and then show that the two programs are equivalent. Unfortunately, that last step is REALLY, REALLY hard, if doable at all, but fortunately my PhD research took off about this time and I did that instead, much to the relief of my wife, parents, and in-laws!
But now, just as I want my artificial intelligence students to find projects of interest, this is the project that I want to return to. It's been about 3 years since I've done my own substantive computer programming, and it's probably been 15 years since I've done substantive programming in the LISP language. So this will be fun! I can trivially write a program that generates all even natural numbers greater than 5: (defun GenEven () (do ((i 3 (+ 1 i))) (nil) (princ (* 2 i)) (princ " "))). A program that generates the sums of all pairs of primes is a good deal more complicated, because in general each addend needs to be verified as prime (http://en.wikipedia.org/wiki/Prime_number). In fact, one way to write this second program is simply to write a program that generates all prime numbers, "append" it to a copy of itself, and output the sum as each copy produces a prime. However we write the second program, what we imagine is something remarkable -- that the latter very complicated program is equivalent to the former very simple program.
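To give the flavor of that second program in Lisp -- a naive, throwaway sketch with trial-division primality, and nothing like the primitive Turing-machine version I actually have in mind:

(defun primep (n)
  ;; Naive trial division up to sqrt(n).
  (and (> n 1)
       (loop for d from 2 to (isqrt n)
             never (zerop (mod n d)))))

(defun goldbach-pair (n)
  ;; For an even n > 5, return a pair of primes (p . q) with p + q = n, or NIL.
  (loop for p from 3 to (floor n 2)
        when (and (primep p) (primep (- n p)))
          return (cons p (- n p))))

;; (goldbach-pair 12) => (5 . 7); a counterexample to the conjecture would return NIL.

Note this checks even numbers one at a time, which is how the computer demonstrations "up to huge even numbers" proceed; it proves nothing about all of them.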
It would be tempting to spend a good deal of time making each of these programs as concise or as efficient as possible, but you see, I am never going to run either program. If I am biased in any direction, it is that each program be as "unstructured" and as "primitive" as possible, because once these programs are defined, a third program, an AI program, is going to search for a sequence of rewrites that will transform one program into the other, while provably maintaining the original functionality of each. The third (AI) program is the one that will actually be run, and I'll be writing it in Lisp. But the two programs, one for generating the even numbers and one for generating the sums of prime pairs, I'm imagining will be written in the most primitive of languages -- the language for programming (or defining) a Turing Machine, a simple form of computer, but not a computer that you would ever power up -- a Turing Machine is strictly a theoretical device (http://en.wikipedia.org/wiki/Turing_machine).
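For concreteness, here is one hypothetical way such a machine could be encoded as Lisp data -- a transition table mapping (state, symbol) pairs to (write, move, next-state) triples -- since data this flat is precisely the kind of thing I'd want the rewrite search to operate over:

;; A tiny two-state machine that writes 1s forever, just to fix the encoding.
(defparameter *rules*
  '(((a 0) . (1 :r b))
    ((b 0) . (1 :r a))))

(defun tm-step (rules state tape pos)
  ;; Apply one transition; the tape is a hash table from position to symbol
  ;; (blank = 0).  Errors out if no rule applies -- fine for a sketch.
  (destructuring-bind (write move next)
      (cdr (assoc (list state (gethash pos tape 0)) rules :test #'equal))
    (setf (gethash pos tape) write)
    (values next (+ pos (if (eq move :r) 1 -1)))))

;; (let ((tape (make-hash-table))) (tm-step *rules* 'a tape 0)) => B, 1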
The reason for the bias of starting with programs as unstructured and primitive as possible is that though there are optimizations in the test for primality, for example, which I could reflect in my initial programs, these optimizations reflect patterns that almost certainly have been exploited in explorations of Goldbach's conjecture by better minds than mine. It may be that any proof, if one is possible, has to rely on reasoning that is just off the (human-conceived) map.
I'd actually started this process as a grad student, exploring ways to bridge these two programs via an AI program that searched through billions of possible rewrites. I'm essentially an experimentalist, and starting with code and looking for data is my bread and butter. I think that what I am really doing is shaping my retirement 20 years from now (or less, for Pete's sake). When friends visit and ask Pat where I am, she'll point to the shed and tell them that I'm working on "that proof". More likely, I'll be tinkering with the AI program itself, making sure that there are no bugs in it -- can you imagine my despair if, near the end of my life and after searching billions of rewrites, my program comes back with "Proof Found!", and I didn't correctly save the path my AI program took to get there!?
The older I get the more I remind myself of my father.
(originally posted Thursday, August 20, 2009 on Wordpress)
Thursday, January 3, 2013
Pedagogical benefits of bottlenecks
Back in the day of my initial computer programming classes we used punch cards (http://en.wikipedia.org/wiki/Punched_card). One card held one line of computer code, so that a 100-line program, which is a small program, required a deck of 100 cards.
To prepare and run my program, I had to get in line for the next available keypunch machine -- I typed my program out on a keyboard, and coded holes were punched into cards. I might have to wait 30 minutes for a free machine on the night before an assignment was due, then take another 15-30 minutes to punch my cards. When I was done, I went to another line, with a 5-10 minute wait, and ran my deck through the mainframe computer's card reader -- it flipped through the card deck like a card shark flips through playing cards, reading the holes in the cards as they sped by. After this, my program waited in the computer's internal queue until its turn came, about 20 minutes, and the computer "ran" (aka executed) my program, printing out the results *IF* my program "worked" (but, sigh :-( even then the results or OUTPUT was often wrong, a result of semantic or "runtime" errors), or the "line" printer (a massive thing that printed on large rolls of paper) would print out my program, flagging the syntax errors that had prevented my program from running at all. If this all seems like a drag, it wasn't really -- the night before an assignment was due was a party -- computer science was probably the most social major on campus. More recently than back-in-the-day, personal laptops have come to dominate and students often work in their dorm rooms (sigh), but I hope that computer science education is as social as it once was, albeit in different forms.
The simple physical operations that I had to perform to correct and run my program and get the results back took about (45 min + 5 min + 20 min =) 70 minutes on a busy night!! Before I got back in that line for the keypunch machine, I rolled out my printout and studied it for at least 30-60 minutes, maybe longer, maybe much longer -- if I found the bugs that appeared to be the problem, I didn't stop there, but studied the entire program looking for more, because there surely were more bugs, and it's usually the case that the "real bug" is NOT in the vicinity of its manifestation. No professor or textbook berated me to take a global view of the code, to go beyond the immediate symptoms and look for causes -- it was the time bottleneck, the 70-minute response time, that encouraged, even demanded, extensive thought on my part. In writing even a several-hundred-line program, I actually ran the program only a handful of times.
Since back-in-the-day, programming development environments have gotten much better and computer response times have decreased drastically. A student can make a change to a program, hit "run", and have the results of the run back before they've finished blinking, at least for the programs of complexity that novice-to-intermediate students will run (in contrast, while in grad school and not that long ago as a faculty member, I wrote and ran programs that might take a week).
But shorter (i.e., faster) response times and friendlier programming environments are not all good news -- not for novice programmers trying to become experts anyway, though they might think otherwise. Unfortunately, it seems that the decrease in response time is accompanied by a decrease in thought time. The removal of a time bottleneck encourages a knee-jerk strategy: make a local change, hit run to see what happens, change, run, change, run, change, run ... anyway, this is my experience as an instructor. A student might (try to) run the program a hundred times using this knee-jerk debugging strategy, and because the strategy focuses on local changes, never benefiting from the reorganizations that stem from a global view, the code is far less elegant and more brittle.
For the experienced programmer, fast response is a godsend, but it's a bane to the novice programmer in training, whether the student knows it or not.
I want to know whether this correlation between computer response time and programmer think time is really true, particularly among novices. And I'm very concerned with what analogous correlations exist with other technologies, and the influence of such correlations on human sustainability.
When I have the time, so wonderful to think about (having time), I'll be contemplating bottlenecks, and how they promote the long view, the global view, particularly as they relate to computing and sustainability. I like the idea of bottlenecks that actively teach and reason with you, even as they slow you down -- but that's for another note.
BTW -- one of the greats in CS, Edsger Dijkstra, went so far as to suggest that the new CS programmer shouldn't be able to access a computer for a year, as I recall. You ought to be able to write correct code for even complicated tasks without getting feedback from a computer at all -- amazing, but I believe it.
(originally posted on Wordpress blog: May 20, 2009)
Wednesday, January 2, 2013
Update on MOOCs in support of blended courses
In October 2012 I gave several talks on my experiences with using MOOCs (or simply using material from MOOCs) in my regular Vanderbilt courses (e.g., http://vimeo.com/53361649, with my presentation starting at about 26:40, speaking from slides at https://my.vanderbilt.edu/douglasfisher/files/2012/02/ITHAKA-Presentation-10-16-12.pdf); a text summary of my experience up to that time can be found on the Chronicle of Higher Education's ProfHacker blog at http://chronicle.com/blogs/profhacker/warming-up-to-moocs/44022.
As part of each of these presentations (ITHAKA S+R, UNCC, CUMU-12), I illustrated the breadth of computer science course offerings online with 4 slides (https://my.vanderbilt.edu/douglasfisher/files/2012/02/CSMajorOnline.pdf ), showing that one could come close to fulfilling the course requirements of a typical CS major online and for free. I was also presenting this material from the perspective of an instructor, so my focus was on how instructors could add to online content, still allowing others to customize by drawing from the expanding online content.
Since these presentations and the ProfHacker post, my graduate "Individual Studies" course (CS 390) in Fall 2012 on Machine Learning has finished; this course was a "wrapper" around Andrew Ng's COURSERA (Stanford) course. The requirements of the CS 390 included the requirements (quizzes, homeworks) of the COURSERA course. There were 10 graduate students who completed the CS 390 course. You can look at the details of the course organization, roughly a cross between a more structured upper division undergrad class (the COURSERA component) and a graduate seminar course (the face-to-face component), at https://my.vanderbilt.edu/cs390fall2012/, should you wish. Rather than feeling more like a TA (it has been suggested by some that faculty roles might morph into glorified TAs under a blended model), I felt LESS like a TA, and more like an "old-school" prof (I suppose as portrayed in 1940s and 50s movies :-), interacting closely with students in class. But there are models of blended learning besides the one that I used, and probably better fits to different preferences.
The CS 390 OVERALL RATINGs (as coded on Vanderbilt's forms, each on a 1...5 range, 5 being "best") for the instructor/course (4.16/4.16) were comparable to the "regular" Spring 2012 Machine Learning (CS 362) offering (4.33/4.22) and my other Fall 2012 courses, CS 360 (4.50/4.25) and CS 260 (4.25/4.00). Derek Bruff and others at Vanderbilt's Center for Teaching did a mid-semester evaluation, we have identified ways to improve, and we are working on further evaluation. On the whole it seems that the wrapper was a well-appreciated course.
Despite this CS 390's more structured (COURSERA) component, the CS 390 required no more than 1/4 the time of my upper-division AI course (CS 260), probably considerably less than that, because the COURSERA platform was doing the work of lectures and grading. A very specific consequence of my CS 390 experience is that I may advocate offering CS 362 (Machine Learning) yearly (instead of every other year), if it can be done as a wrapper (all or some of the time). Coincidentally, there is at least one other ML MOOC coming online soon, enabling more customization across the two MOOCs, to say nothing of the material that I will put up (e.g., on YouTube), as I have done for CS 260; in fact, I have a couple of "best sellers", which resulted from students of yet another AI MOOC looking for clarification on a couple of algorithms (i.e., generalized arc consistency and iterative deepening): https://www.youtube.com/channel/UCWOFdpEfNuQP3O_JUiwhT8A.
More generally, most commentaries point out that reduced faculty workload per course (for courses nonetheless regarded by students and faculty as strong!!!) would enable more high-quality electives for a fixed staff level, and it would allow a finer granularity in assessing faculty workload; finer granularity in characterizing course workload could translate into finer granularity in course buyouts, enabling even a heavily committed research faculty member to lead a (blended) course, etc. I haven't seen the increased flexibility of faculty buyouts mentioned in previous commentaries, but I suspect that some of the most grant-committed faculty would like to get in front of a class of undergrads if it just didn't take so much time, and of course, I'm sure undergrads would love this too.
A GRADUATE-LEVEL INDIVIDUAL STUDIES course, such as my CS 390, seemed like the most conservative next step into formal blended learning courses, but the online CS offerings, through COURSERA particularly (but EdX and Udacity too, and including "open source" initiatives), are large, giving lots of opportunities for blended courses -- again, recalling the "CS major online" (https://my.vanderbilt.edu/douglasfisher/files/2012/02/CSMajorOnline.pdf ). And because CS enrollments are bursting at the seams, CS is perhaps an ideal program in which to design and vet blended learning courses.
My initial thoughts are that the "standard format" ("the way" we have always done it) might remain ideal for core classes (we don't want our faculty's skills to atrophy!!!), as well as for electives that correspond to the primary expertise of instructors, but blended courses could support electives for which there is limited existing faculty expertise yet real student (and faculty) interest. My limited experience is that students would like and appreciate this stretch; and I, for one, would love to learn with a group of students in an area that I was not expert in; faculty member as "lead learner" is only one blended model, albeit much different than the CS 390 model I have experience with.