The Ladder of Abstraction and the Future of Online Teaching

This post originally appeared on the Software Carpentry website.

Up and Down the Ladder of Abstraction is one of the most thought-provoking things to hit the web in a long time. Its author, Bret Victor, doesn't just talk about the design process—he shows us what a great interactive tutorial ought to look like. (For a shorter, simpler, but equally inspiring example, have a look at the home page for his Tangle project, and ask yourself what learning would be like if learners could play with every diagram and quantitative statement in their "textbooks" that way...)

Examples like these, and reflection on things I've learned by following people like Mark Guzdial and Audrey Watters, have made me realize that there's a big difference between online teaching and online learning. Unfortunately, Software Carpentry has focused on the former rather than the latter—on presenting content, rather than on how (and how much) people actually learn. Partly, this is because the former is easier, since I have control over the notes, the videos, and so on. Partly too, though, it reflects an academic culture in which professors focus on lecturing, rather than on changing students' understanding of the world. (We've all had students who got B's, or even A's, without really understanding the course material...) And partly, I've focused on production rather than consumption because the latter is very hard to assess. Even when we're teaching this stuff in person, as we're doing right now in Toronto, it's very difficult to get a handle on how much students have actually absorbed.

For example, suppose we're teaching Python (which we are), and one of the exercises is to read a bunch of numbers from a file and print their mean. Ignoring floating-point issues, there's only one right answer, but that doesn't mean we can say, "If your program prints 6, then you understand loops, file I/O, and string-to-number conversions." What we'll actually see is people hacking and tweaking their code, more or less at random, until voila, a 6 pops out and they're done. Their programs will be littered with unused variables, five-stage assignments like:

a = 5
b = a
c = b
d = c
print(d)

and so on (what Katy Huff called cargo cult programming). We will have taught, and students will have produced the right answer for one specific case, but in many cases, they won't have learned.
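For contrast, here is a minimal sketch of what a tidy solution to that exercise might look like. The filename numbers.txt and the one-number-per-line format are assumptions I've made for illustration; they aren't part of the original exercise description:

# Read one number per line from a file and print the mean.
# "numbers.txt" and the one-per-line format are assumed for this sketch.
total = 0.0
count = 0
with open("numbers.txt") as reader:
    for line in reader:
        total += float(line)
        count += 1
print(total / count)

The point isn't this particular solution; it's that a learner can arrive at the same printed value through a much more confused process, and the printed value alone won't tell us which happened.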

Now, you'd think this would be easy to check: give them a similar problem, and see if they can transfer their knowledge. But that's not going to work, because they can hack and tweak their way to an accidentally-correct answer to the second problem just as they did for the first. We could time them, on the assumption that if they've learned, they'll solve the second problem faster than the first, and the third faster than the second, but all that's going to do (at best) is identify the people who aren't learning; it isn't going to tell us why they aren't, or what they don't understand, which in turn means that we won't know what to explain to them to clear things up.

And that's the real problem with many production-oriented approaches to online learning, from Software Carpentry to the Khan Academy. To paraphrase Tolstoy, successful learners are all alike; every unsuccessful learner is unsuccessful in their own way. There's only one correct mental model of how regular expressions work, but there are dozens or hundreds of ways to misunderstand them, and each one requires a different corrective explanation. What's worse, as we shift from knowing that to knowing how—from memorizing the multiplication table to solving rope-and-pulley dynamics problems or using the Unix shell—the space of possible misconceptions grows very, very quickly, and with it, the difficulty of diagnosing and correcting misunderstandings.
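To make that concrete with one hypothetical example of my own (not something drawn from the course material), consider a learner who believes that re.match scans the whole string:

import re

# A common misconception: expecting re.match to find a pattern anywhere in the string.
# In fact, re.match only matches at the beginning; re.search scans the whole string.
print(re.match(r"\d+", "room 101"))            # prints None, which surprises the learner
print(re.search(r"\d+", "room 101").group())   # prints '101'

The explanation that clears up this confusion is different from the one that untangles, say, greedy matching, which is exactly why the space of corrective explanations grows so quickly.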

In the long run, we may be able to develop expert systems (or cognitive tutors) to help with some of these issues. Right now, though, I think the only option is to keep a human mentor in the loop. I only have to look over someone's shoulder for a couple of minutes to see whether they've understood a lesson or not; watching how they produce an answer tells me more about their learning than the answer itself. Using today's tools, it would be relatively easy to have students record screencasts of themselves solving simple programming exercises and submit those along with their source code. However, I suspect most learners would be uncomfortable doing this, as it would feel very Big Brother-ish.

I'd welcome your thoughts on all of this: on how we can shift Software Carpentry's focus from teaching to learning, and on how to assess the latter so that we can tell what's working and what isn't. After all, we're here to learn too...