DeepMind, famously known for creating the computer programs AlphaGo and AlphaZero, features a blog that showcases its current research efforts in artificial intelligence (AI). Its more recent posts include: “How evolutionary selection can train more capable self-driving cars” by Yu-hsin Chen, “Using AI to give doctors a 48-hour head start on life-threatening illness” by Mustafa Suleyman and Dominic King, and “Using machine learning to accelerate ecological research” by Stig Petersen, Meredith Palmer, Ulrich Paquet, and Pushmeet Kohli. While the blog covers a broad range of topics, I was especially excited to see the launch of the DeepMind podcast, hosted by Hannah Fry (@fryrsquared), author of several books including “Hello World: How to be human in the age of the machine”, “The Indisputable Existence of Santa Claus”, and “The Mathematics of Love”.
Why make a podcast? As mentioned on their website,
“Put simply, we love the convenience and format. We thought podcasts were a great option for a series about AI because they allow nuanced discussion and lets listeners hear directly from the people doing the work.”
This podcast is aimed at people who are curious about AI but may not have a technical background. In this eight-part series, listeners get an inside look from the researchers themselves at the challenges the field of AI is tackling today. Curious? See the trailer below.
What I enjoyed most about the podcast was the many analogies used to explain how AI connects to human experiences and other fields. Also, each 30-minute episode includes notes and resources to learn more about the topics covered. Here, I summarize my top three episodes.
“AI and Neuroscience: The Virtuous Circle”
How do we define intelligence? Jess Hamrick mentions that the debate centers on two camps: should we create AI that is smarter than humans, or AI that is as intelligent as humans? Matt Botvinick describes how neural activity suggests that human brains learn by replaying memories, and a very similar idea has a place in AI research. For example, an AI can beat Atari games such as Space Invaders mainly by learning from previous games played and maximizing its rewards. Human abilities such as linking memories to each other, using mental simulations, and adapting to new situations also give AI a better capacity for solving problems. By studying AI and neuroscience together, we can create a virtuous circle in which knowledge flows between the two fields.
Interviewees: DeepMind CEO and co-founder Demis Hassabis; Matt Botvinick, Director of Neuroscience Research; research scientists Jess Hamrick and Greg Wayne; and Director of Research Koray Kavukcuoglu.
“Out of the Lab”
Can we use AI to solve real-world problems outside the lab? Pearse Keane discusses how, as the number of patients increases, it becomes a growing challenge to accurately and quickly diagnose urgent and common conditions such as age-related macular degeneration (AMD), which can lead to blindness. Using AI could promote the early detection and treatment of such diseases. Sandy Nelson explores what AI can tell us about proteins, which play a role in many neurodegenerative diseases such as Alzheimer’s. Proteins fold in on themselves (in about $10^{300}$ ways!), and their shapes are of great interest to scientists; AI can find clues to reduce the number of shapes being considered for a particular problem. Finally, Sims Witherspoon describes how our use of technology, which relies on data centers, carries a great energy demand: data centers consume 3% of the world’s energy. We can ask AI to tell us how to adjust the dials in data centers to reduce energy use.
Interviewees: Pearse Keane, consultant ophthalmologist at Moorfields Eye Hospital; Sandy Nelson, Product Manager for DeepMind’s Science Program; and DeepMind Program Manager Sims Witherspoon.
“AI for Everyone”
How can we ethically develop, implement, and use AI? One concern Verity Harding mentions is that AI could be used in ways other than intended: if AI can be transformative for good, it can also be transformative in negative ways. Lila Ibrahim makes the point that building technology carries a lot of responsibility, especially now that it is available to more people. For example, when using AI in the criminal justice system to reduce inconsistencies among rulings, one must tread lightly and account for racial prejudice and bias in both the data and the algorithmic implementation. William Isaac highlights that algorithms are not necessarily more objective than humans, so we still have to grapple with ethical questions. Along with Silvia Chiappa, he points out how difficult it is to define and technically measure fairness. Thus, interrogating data and involving more voices in the conversation is, and will remain, crucial to making sure we build a world that belongs to all.
Interviewees: Verity Harding, Co-Lead of DeepMind Ethics and Society; DeepMind’s COO Lila Ibrahim; and research scientists William Isaac and Silvia Chiappa.
Other episodes include: “Go to Zero”, “Life is like a game”, “AI, Robot”, “Towards the future”, and “Demis Hassabis – The interview”. Overall, the podcast is a great way to make AI research more accessible!
Do you have suggestions for topics you would like us to cover in upcoming posts? Reach out to us in the comments below or let us know on Twitter (@MissVRiveraQ).