Dr. Bastian Rieck is a senior assistant in the Machine Learning & Computational Biology Lab of Prof. Dr. Karsten Borgwardt at ETH Zürich. He is interested in understanding complex data sets and in topology-based machine learning methods in biomedical contexts, especially those related to developing tools for personalized medicine.
In his blog, which has been active since 2006, he shares his musings on topics related to programming, his research interests and projects, and many “how-to” posts. What I like about this blog is that it strikes a very nice balance between sharing the experience of being an academic, providing advice for other researchers, and diving into topics related to machine learning and programming. In this tour, I’ll give you a glimpse of some of his most recent posts.
The Power of Admitting Ignorance
In this post, Rieck shares the story of his experience as an undergrad taking an advanced mathematics course. He describes what I feel many of us have experienced at some point in our careers: wondering how your knowledge stacks up against that of your peers. In this class, he found himself in awe of his peers, who seemed to grasp every concept as soon as it was introduced, which left him feeling increasingly out of place. He then recalls how his professor, by being honest about his own limited knowledge of a subject, really changed his perspective. In a footnote, he even highlights how this particular interaction became a turning point in his career!
“There is a power in being as honest and outspoken as Prof. Kreck was. Here is this proficient and prolific member of THEM, and he could have just made up something on the spot to make me feel dumb. Instead, he chose the intellectually honest option, and made it clear that this is the normal state of affairs in mathematics (or any sufficiently complicated topic). I relish the fact that such a small action could have such a profound impact on one person, and I am grateful that I dared pose my question.
In the years since, in my own dealings with researchers, I never once feigned knowledge when I was not feeling sufficiently confident about it. I think it is important to be honest about what you know and what you do not know. Ignorance is not a moral blemish—pretending to be smarter than you are is (just as choosing to remain in a state of ignorance is).
So the moral of this story is: do not be afraid of not knowing or not understanding something.”
Similarly, I appreciated his honesty in describing this experience. It made me reflect on similar instances in my career and on how, by being vulnerable when we don’t understand something, we can humanize ourselves to our students and peers.
Machine Learning Needs a Langlands Programme
This post caught my attention with the beautiful illustration of “The Land of Middle Math” (see Figure 1) by Prof. Dr. Franka Miriam Brückler. In this post, he argues that machine learning, as an ever-growing field, would benefit from a structure for communicating among its different branches, especially since such communication can be difficult even though the branches share commonalities. He discusses some solutions, including creating something similar to the Langlands Programme, which aims to study the connections between number theory and geometry. I love his analogy in which he describes the programme as the ‘Rosetta Stone’ of mathematics.
“The individual branches of mathematics are represented as different columns on the stone. Each statement and each theorem have their counterpart in another domain. The beauty of this is that, if I have a certain problem that I cannot solve in one domain, I just translate it to another one! André Weil discussed this analogy in a letter to his sister, and his work is a fascinating example of using parts of the mathematical Rosetta Stone to prove theorems.”
He argues that the main benefit of a programme like this would be to draw as many connections as possible among results in different fields, so that we avoid, in a sense, over-specializing in the tools that we create as researchers.
“The classical way of writing a machine learning paper is to present a novel solution to a specific problem. We want to say ‘Look, we are able to do things now that we could not do before!’, such as the aforementioned learning on sets. This is highly relevant, but we must not forget that we should also look at how our novel approach is connected to the field. Does it maybe permit generalising statements? Does it shed some light on a problem that was poorly understood before? If we never explore the links, we risk making ourselves into toolmakers with too many bits and pieces. Looking for the general instead of the specific is the key to avoid this—and this is why machine learning needs its own version of the Langlands programme. It does not have to be so ambitious or far-reaching, but it should be a motivation for us to investigate outside our respective niche.”
In this post, Rieck highlights how similar the choices made by the designers of a program’s installation script are to those made by researchers who develop software packages, in particular the danger of exposing misleading parameters or defaults to users.
“It dawned on me at some point that we, i.e. researchers that develop a software package in addition to their research, are doing precisely the same thing. We create a software tool for solving a certain problem. It might be an itch that we want to scratch, or it might be software that is related to our research—in the end, we all write some code in some language to produce some kind of value. How often do we think about the dangers of the API that we are exposing, though?”
I found this post super helpful when talking to the students in my machine learning class about important considerations when training a model. Many machine learning models are implemented in the Python library scikit-learn and come with a set of defaults that, when misunderstood or misused, could lead you to draw incorrect conclusions. For example, he discusses how scikit-learn’s Logistic Regression applies regularization by default, a technique that constrains the model’s coefficients to help it generalize to new data points. However, applying this technique should be the user’s explicit choice, and a hidden default can affect the reproducibility of results.
“In the worst case, it might trick users into believing that they did not employ regularisation when in fact they did: when comparing to other methods in a publication, it is common practice to report the parameters that one selected for a classifier. A somewhat hidden assumption on the model can be very problematic for the reproducibility of a paper.”
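To make the effect of that hidden default concrete, here is a minimal sketch (my own illustration, not Rieck’s code) of logistic regression trained with and without an L2 penalty via plain gradient descent. The penalty term shrinks the learned coefficients, which is exactly the kind of silent behavior a user comparing “unregularized” models would need to know about:

```python
import numpy as np

# Toy data: two features, labels determined by their sum.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def fit_logreg(X, y, l2=0.0, lr=0.1, steps=500):
    """Gradient descent on the logistic loss with an optional L2 penalty."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
        grad = X.T @ (p - y) / len(y) + l2 * w  # loss gradient + penalty term
        w -= lr * grad
    return w

w_plain = fit_logreg(X, y, l2=0.0)  # no regularization: weights keep growing
w_reg = fit_logreg(X, y, l2=1.0)    # L2 penalty shrinks the weights

print(np.linalg.norm(w_plain), np.linalg.norm(w_reg))
```

Reporting the coefficients of these two models side by side as if they came from the same procedure is precisely the reproducibility trap the quote describes.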
He ends by discussing the benefits of having parameter defaults (by no means should they be removed!) and provides tips on how to set default parameters for complex algorithms.
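One common pattern in this spirit (my illustration, with a hypothetical `fit_model` function, not necessarily one of Rieck’s specific tips) is to keep the default but make it loud: the function announces the choice it is making on the user’s behalf, so a silent assumption cannot slip into a results table unnoticed:

```python
import warnings

def fit_model(X, y, regularization=None):
    """Hypothetical training function: the default is explicit and announced,
    so users cannot mistake a silent choice for 'no regularization'."""
    if regularization is None:
        warnings.warn(
            "regularization not specified; defaulting to 'l2'. "
            "Pass regularization='none' to disable it explicitly."
        )
        regularization = "l2"
    # ... actual fitting would happen here; we only report the choice made.
    return {"regularization": regularization}

# Calling without the argument still works, but triggers a visible warning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    model = fit_model([[0.0], [1.0]], [0, 1])

print(model["regularization"], len(caught))
```

The design choice here is that convenience is preserved while the default stops being hidden, which addresses the reproducibility concern from the previous post without removing defaults altogether.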
Do you have suggestions of topics or blogs you would like us to consider covering in upcoming posts? Resources to share? Reach out to us in the comments below or let us know on Twitter (@MissVRiveraQ).