Uncovering ‘What if?’ and ‘Why?’ in the A.I. era

Artificial intelligence, which has developed rapidly over the last few decades, is concerned with a machine’s ability to imitate intelligent human behavior. As humans, we make decisions every day that rely on the causes and effects of our actions. For example, we know that working out at the gym will increase the number of calories we burn. However, the implications this has for our overall health are more difficult to address. This boils down to the difference between two statistical concepts: correlation and causation.

  • Correlation: a statistical measure of how strongly two quantities vary together.
  • Causation: a relationship in which one thing directly brings about the other.

The distinction between the two can have important implications. On the website “Spurious Correlations” by Tyler Vigen, you can explore a wide variety of correlations that are due to chance. One of my favorites can be seen in Figure 1, which illustrates the correlation between math doctorates awarded and the uranium stored at United States nuclear power plants. While these two variables have a correlation of 95.23%, it is highly unrealistic to think my degree caused an increase in the amount of uranium stored in the United States.

[Chart: math doctorates awarded vs. uranium stored at US nuclear power plants over time.]

Figure 1: Example of a spurious correlation by Tyler Vigen.
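
To get a feel for how easily this happens, here is a minimal Python sketch (with made-up numbers, not Vigen’s actual data) showing that two unrelated quantities that merely trend in the same direction can produce a correlation coefficient close to 1:

```python
import numpy as np

# Made-up example: two series that simply trend upward over ten "years".
rng = np.random.default_rng(0)
doctorates = 1000 + 50 * np.arange(10) + rng.normal(0, 20, size=10)  # hypothetical degrees awarded
uranium = 60 + 3 * np.arange(10) + rng.normal(0, 2, size=10)         # hypothetical tons stored

# The Pearson correlation comes out close to 1 because both series share
# a time trend, not because one causes the other.
r = np.corrcoef(doctorates, uranium)[0, 1]
print(f"correlation: {r:.2f}")
```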

When we think of causality, we want to show that there is a direct relationship between two variables. This can be harder than expected since, as the famous phrase goes, “correlation doesn’t imply causation”. One of the first examples I encountered as a student was based on the question: Do storks deliver babies? Many parents may wish the answer were yes, to avoid explaining to their kids where babies come from. While the number of storks and the number of human births exhibit a positive correlation (see “Storks Deliver Babies” by Robert Matthews), the relationship is not causal.

I like this simple example that Adam Kelleher uses in his article “If Correlation Doesn’t Imply Causation, Then What Does?”. Think of your daily commute: if your alarm doesn’t go off or there is heavy traffic, you will be late for work. There are many other events in your morning routine that could also make you late (e.g., traffic is fine but you spill coffee on the way, or your alarm goes off but you oversleep). We treat all of this as noise, and as the author mentions, “it takes care of the host of “what-if” questions that come up from all of the unlikely exceptions we haven’t taken into account”.
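
As a rough illustration (my own toy model, not Kelleher’s), we can write lateness as a function of the causes we explicitly model, plus a noise term that absorbs every unlikely exception we left out:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # simulated mornings

# Causes we explicitly model (probabilities are made up).
alarm_fails = rng.random(n) < 0.05
bad_traffic = rng.random(n) < 0.20

# "Noise": spilled coffee, oversleeping, and every other what-if we did not
# bother to model, lumped together as one rare random event.
noise = rng.random(n) < 0.03

late = alarm_fails | bad_traffic | noise

print("P(late):", round(late.mean(), 2))
print("P(late | no modeled cause):", round(late[~alarm_fails & ~bad_traffic].mean(), 2))
```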

In the new era of big data, how do we discover the underlying relationships in large datasets, and which relationships can we trust? “The Book of Why” by Judea Pearl and Dana Mackenzie, which was recently reviewed in the Notices of the AMS by Dr. Lisa R. Goldberg, tackles the question of how we can use the theory of causality to model and interpret data. In the review, the concept of “The Ladder of Causality” is summarized nicely:

“The bottom rung is for model-free statistical methods that rely strictly on association or correlation. The middle rung is for interventions that allow for the measurement of cause and effect. The top rung is for counterfactual analysis, the exploration of alternative realities.”

Figure 2: Illustration of the ladder of causality in “The Book of Why” by Judea Pearl and Dana Mackenzie.

To achieve intelligence, Pearl proposes that a machine’s reasoning should climb the ladder illustrated in Figure 2: from seeing associations in data, to doing and planning interventions to obtain a desired outcome, to becoming a counterfactual learner that can imagine what does not yet exist and reason about it from observed data. As Pearl mentions in an interview with Kevin Hartnett of Quanta Magazine,
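
To make the first two rungs concrete, here is a small Python sketch of a made-up structural causal model in which a hidden common cause Z drives both X and Y. On the bottom rung we merely condition on X in observational data (seeing); on the middle rung we set X ourselves and rerun the model (doing), and the two answers differ:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

def simulate(do_x=None):
    """Toy structural causal model (made up): Z -> X, Z -> Y, and X -> Y."""
    z = rng.random(n) < 0.5                                      # hidden common cause
    x = rng.random(n) < np.where(z, 0.9, 0.1) if do_x is None else np.full(n, do_x)
    y = rng.random(n) < 0.3 + 0.2 * x + 0.4 * z                  # X and Z both raise Y
    return x, y

# Rung 1 (seeing): condition on X = 1 in purely observational data.
x, y = simulate()
print("P(Y=1 | X=1):", round(y[x == 1].mean(), 2))      # inflated by the confounder Z

# Rung 2 (doing): intervene by setting X = 1 and rerunning the model.
_, y_do = simulate(do_x=True)
print("P(Y=1 | do(X=1)):", round(y_do.mean(), 2))       # the effect of X alone
```

The gap between those two numbers is what climbing from the first rung to the second buys: the intervention removes the confounding influence of Z.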

“If we want machines to reason about interventions (“What if we ban cigarettes?”) and introspection (“What if I had finished high school?”), we must invoke causal models. Associations are not enough — and this is a mathematical fact, not opinion.”

Reaching the top of the ladder of causality may still be beyond our grasp. As our understanding progresses, I would love to see how we integrate the qualitative and quantitative aspects of building a picture of the world from data. As Andrew Gelman points out in his review of the book,

“If you think you’re working with a purely qualitative model, it turns out that, no, you’re actually making lots of data-based quantitative decisions about which effects and interactions you decide are real and which ones you decide are not there. And if you think you’re working with a purely quantitative model, no, you’re really making lots of assumptions (causal or otherwise) about how your data connect to reality.”

From my perspective, as humans we are only able to imagine when we consider both. For machines to answer “What if?” and “Why?”, they must do so as well.

About Vanessa Rivera-Quinones

Mathematics Ph.D. with a passion for telling stories through numbers using mathematical models, data science, science communication, and education. Follow her on Twitter: @VRiveraQPhD.