When I first started teaching intro statistics during my postdoc, someone recommended the book Teaching What You Don’t Know, by Therese Huston.
Full confession: like pretty much every professional development book anyone has ever recommended to me, I haven’t read it. Maybe this summer? I’ll be teaching our senior seminar on math history in the fall, and I could use some advice.
I didn’t read the book, but I did take Carolyn Cuff’s fantastic MAA Minicourse on teaching intro statistics, and I muddled through just fine. But it took a lot of prep, and there were more than a couple times I didn’t feel confident answering my students’ questions. By about my third time through, everything came together pretty well, and now I love teaching stats.
This semester, another department was in a bind. A professor had a major health issue and was going to be out for several weeks. They’d covered all his classes but one: a machine learning course for bioinformatics students. All their leads for substitutes had come up dry, and they asked if the math department knew somebody.
And because I still don’t know how to keep my mouth shut and want to help people out, I said that I knew a bit about machine learning and could probably fill in for a few weeks.
Oh, and the course is in R, a language I’ve played around with but never long enough to get really fluent, and even that was a couple of years ago. But hey, I’ve been meaning to get better with R. No better way than immersion, right? It’ll be fun!
That was the end of February. The original instructor is recovering but won’t be back until after the end of the term. So I’ve spent the last two months teaching a topic I understand in the abstract but haven’t used much in real life, to students who don’t know much abstract math but will have to apply it anyway.
Y’know, phrased like that it doesn’t seem too different from most classes I’ve ever taught.
I’ve put an insane amount of work into preparing for this course. Luckily, one of the textbooks already chosen for the course, Machine Learning with R by Brett Lantz, is useful and well written. It features lots of tutorials that let premade packages do all the heavy lifting, so the students can focus on data preprocessing and interpreting the results. I’ve also revisited the Coursera Machine Learning course by Andrew Ng to refresh my memory and give my students a little more mathematical depth than the book provides, without overwhelming them with linear algebra. And I’ve found a bunch of great online tutorials: Markov models and hidden Markov models for DNA sequence generation, how to turn R into MATLAB, and how to predict who would survive the Titanic disaster. And I’ve finally gotten a chance to play around with resources like the UCI Machine Learning Repository and Kaggle.
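To give a flavor of the kind of exercise those Markov-model tutorials walk through, here’s a minimal sketch in base R of generating a DNA sequence from a first-order Markov chain. The transition probabilities are made up for illustration; a real exercise would estimate them from training sequences.

```r
# A first-order Markov chain over the DNA alphabet {A, C, G, T}.
# The transition matrix below is invented for illustration; in a real
# exercise you'd estimate it from a training sequence.
bases <- c("A", "C", "G", "T")
trans <- matrix(c(0.40, 0.20, 0.25, 0.15,
                  0.20, 0.30, 0.30, 0.20,
                  0.25, 0.25, 0.30, 0.20,
                  0.15, 0.25, 0.25, 0.35),
                nrow = 4, byrow = TRUE,
                dimnames = list(bases, bases))

generate_dna <- function(n, start = "A") {
  out <- character(n)
  out[1] <- start
  for (i in 2:n) {
    # Each base depends only on the one immediately before it.
    out[i] <- sample(bases, 1, prob = trans[out[i - 1], ])
  }
  paste(out, collapse = "")
}

set.seed(1)
generate_dna(30)
```

The hidden Markov model version adds unobserved states on top of this, which is exactly where the premade packages start doing the heavy lifting for the students.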
I’ve enjoyed this new challenge in my teaching. This semester marks my 10th anniversary of teaching at the college level, and it’s been refreshing to have a brand-new topic that keeps me on my toes and won’t let me even try to phone it in. And I think I’ll even use this new breadth of knowledge to do some research. A colleague in the humanities here wants to analyze a massive poetry data set, but he doesn’t have the technical background to implement anything fancy. My school values interdisciplinary work, so this project will benefit both of our CVs, in addition to just being fun and interesting. And of course the extra money’s nice, though being paid at the per-course rate is a needed reminder of just how dire life can be in the adjuncting world.
Even with all those upsides, I’m not sure it’s been worth it. This was supposed to be a pretty light semester for me, one where I thought I’d finally get some decent research done. I haven’t so much as opened a manuscript since February. And it’s been a ton of extra stress and a return of all those old imposter-syndrome feelings, just when I was starting to shake them.
So maybe the biggest thing I’ve learned this semester is that sometimes I really should just keep my mouth shut.