Thursday, January 3, 2013

Pedagogical benefits of bottlenecks


Back in the days of my initial computer programming classes, we used punched cards (http://en.wikipedia.org/wiki/Punched_card). One card held one line of computer code, so a 100-line program, which is a small program, required a deck of 100 cards.

To prepare and run my program, I had to get in line for the next available keypunch machine -- I typed my program out on a keyboard and coded holes were punched into cards. I might have to wait 30 minutes for a free machine on the night before an assignment was due, then take another 15-30 minutes to punch my cards. When I was done, I went to another line, with a 5-10 minute wait, and ran my deck through the mainframe computer's card reader -- it flipped through the card deck like a card shark flips through playing cards, reading the holes in the cards as they sped by. After this, my program waited in the computer's internal queue until its turn came, about 20 minutes, and the computer "ran" (aka executed) my program, printing out the results *IF* my program "worked" (but, sigh :-( even then the results, or OUTPUT, were often wrong, the result of semantic or "runtime" errors); otherwise the "line" printer (a massive thing that printed on large rolls of paper) would print out my program, flagging the syntax errors that had prevented my program from running at all.

If this all seems like a drag, it wasn't really -- the night before an assignment was due was a party -- computer science was probably the most social major on campus. More recently than back-in-the-day, personal laptops have come to dominate and students often work in their dorm rooms (sigh), but I hope that computer science education is as social as it once was, albeit in different forms.

The simple physical operations that I had to perform to correct and run my program and get the results back took about (45 min + 5 min + 20 min =) 70 minutes on a busy night!! Before I got back in that line for the keypunch machine, I rolled out my printout and studied it for at least 30-60 minutes, maybe longer, maybe much longer -- if I found the bugs that appeared to be the problem, I didn't stop there, but studied the entire program looking for more, because there surely were more bugs, and it's usually the case that the "real bug" is NOT in the vicinity of its manifestation. No professor or textbook berated me to take a global view of the code, to go beyond the immediate symptoms and look for causes -- it was the time bottleneck, the 70-minute response time, that encouraged, even demanded, extensive thought on my part. In writing even a several-hundred-line program, I actually ran the program only a handful of times.

Since back-in-the-day, program development environments have gotten much better and computer response times have decreased drastically. A student can make a change to a program, hit "run", and have the results of the run back before they've finished blinking, at least for programs of the complexity that novice-to-intermediate students will run (in contrast, in grad school, and not that long ago as a faculty member, I wrote and ran programs that might take a week).

But shorter (i.e., faster) response times and friendlier programming environments are not all good news -- not for novice programmers trying to become experts, anyway, though they might think otherwise. Unfortunately, it seems that the decrease in response time is accompanied by a decrease in thought time. The removal of a time bottleneck encourages a cycle of make a local change, hit run to see what happens, change, run, change, run, change, run ... at least, this is my experience as an instructor. A student might (try to) run the program a hundred times using this knee-jerk debugging strategy, and because the strategy focuses on local changes, never benefiting from the reorganizations that stem from a global view, the code ends up far less elegant and more brittle.

For the experienced programmer, fast response is a godsend, but it's a bane to the novice programmer in training, whether the student knows it or not.

I want to know whether this correlation between computer response time and programmer think time is real, particularly among novices. And I'm very interested in what analogous correlations exist with other technologies, and in the influence of such correlations on human sustainability.

When I have the time -- so wonderful to think about (having time) -- I'll be contemplating bottlenecks and how they promote the long view, the global view, particularly as they relate to computing and sustainability. I like the idea of bottlenecks that actively teach and reason with you, even as they slow you down -- but that's another note.

BTW -- one of the greats in CS, Edsger Dijkstra, went so far as to suggest, as I recall, that the new CS programmer shouldn't be able to access a computer for a year. You ought to be able to write correct code for even complicated tasks without getting feedback from a computer at all -- amazing, but I believe it.

(originally posted on Wordpress blog: May 20, 2009)

Wednesday, January 2, 2013

Update on MOOCs in support of blended courses

In October 2012 I gave several talks on my experiences with using MOOCs (or simply using material from MOOCs) in my regular Vanderbilt courses (e.g., http://vimeo.com/53361649, with my presentation starting at about 26:40, speaking from slides at https://my.vanderbilt.edu/douglasfisher/files/2012/02/ITHAKA-Presentation-10-16-12.pdf); a text summary of my experience up to that time can be found on the Chronicle of Higher Education's ProfHacker blog at http://chronicle.com/blogs/profhacker/warming-up-to-moocs/44022.

As part of each of these presentations (ITHAKA S+R, UNCC, CUMU-12), I illustrated the breadth of computer science course offerings online with 4 slides (https://my.vanderbilt.edu/douglasfisher/files/2012/02/CSMajorOnline.pdf), showing that one could come close to fulfilling the course requirements of a typical CS major online and for free. I was also presenting this material from the perspective of an instructor, so my focus was on how instructors could add to online content, allowing still others to customize courses by drawing from that expanding online content.

Since these presentations and the ProfHacker post, my Fall 2012 graduate "Individual Studies" course (CS 390) on Machine Learning has finished; this course was a "wrapper" around Andrew Ng's COURSERA (Stanford) course. The requirements of CS 390 included the requirements (quizzes, homeworks) of the COURSERA course. Ten graduate students completed the CS 390 course. You can look at the details of the course organization, roughly a cross between a more structured upper-division undergrad class (the COURSERA component) and a graduate seminar course (the face-to-face component), at https://my.vanderbilt.edu/cs390fall2012/, should you wish. Rather than feeling more like a TA (some have suggested that faculty roles might morph into those of glorified TAs under a blended model), I felt LESS like a TA, and more like an "old-school" prof (I suppose as portrayed in 1940s and '50s movies :-), interacting closely with students in class. But there are models of blended learning besides the one that I used, and probably better fits to different preferences.

The CS 390 OVERALL RATINGs (as coded on Vanderbilt's forms, each on a 1...5 range, 5 being "best") for the instructor/course (4.16/4.16) were comparable to those of the "regular" Spring 2012 Machine Learning (CS 362) offering (4.33/4.22) and my other Fall 2012 courses, CS 360 (4.50/4.25) and CS 260 (4.25/4.00). Derek Bruff and others at Vanderbilt's Center for Teaching did a mid-semester evaluation, we have identified ways to improve, and we are working on further evaluation. On the whole it seems that the wrapper was a well-appreciated course.

Despite CS 390's more structured (COURSERA) component, it required no more than 1/4 the time of my upper-division AI course (CS 260), probably considerably less than that, because the COURSERA platform was doing the work of lectures and grading. A very specific consequence of my CS 390 experience is that I may advocate offering CS 362 (Machine Learning) yearly (instead of every other year), if it can be done as a wrapper (all or some of the time). Coincidentally, there is at least one other ML MOOC coming online soon, enabling more customization across the two MOOCs, to say nothing of the material that I will put up (e.g., on YouTube), as I have done for CS 260; in fact, I have a couple of "best sellers", which resulted from students of yet another AI MOOC looking for clarification on a couple of algorithms (i.e., generalized arc consistency and iterative deepening): https://www.youtube.com/channel/UCWOFdpEfNuQP3O_JUiwhT8A.

More generally, most commentaries point out that reduced faculty workload per course (for courses nonetheless regarded by students and faculty as strong!!!) would enable more high-quality electives for a fixed staff level, and it would allow finer granularity in assessing faculty workload; finer granularity in characterizing course workload could translate into finer granularity in course buyouts, enabling even a heavily committed research faculty member to lead a (blended) course, etc. I haven't seen the increased flexibility of faculty buyouts mentioned in previous commentaries, but I suspect that some of the most grant-committed faculty would like to get in front of a class of undergrads if it just didn't take so much time, and of course, I'm sure undergrads would love this too.

A GRADUATE-LEVEL INDIVIDUAL STUDIES course, such as my CS 390, seemed like the most conservative next step into formal blended learning courses, but the online CS offerings through COURSERA particularly (but EdX and Udacity too, to say nothing of "open-source" initiatives) are large, giving lots of opportunities for blended courses -- again, recall the "CS major online" (https://my.vanderbilt.edu/douglasfisher/files/2012/02/CSMajorOnline.pdf). And because CS enrollments are bursting at the seams, CS is perhaps an ideal program in which to design and vet blended learning courses.

My initial thoughts are that the "standard format" ("the way" we have always done it) might remain ideal for core classes (we don't want our faculty's skills to atrophy!!!), as well as for electives that correspond to the primary expertise of instructors, but that blended courses might be ideal for offering electives for which there is limited existing faculty expertise but real student (and faculty) interest. My limited experience is that students would like and appreciate this stretch; and I, for one, would love to learn with a group of students in an area that I was not expert in; faculty member as "lead learner" is only one blended model, albeit much different than the CS 390 model I have experience with.